
Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
4963 changed files with 677545 additions and 0 deletions

plugins/outputs/elasticsearch/README.md

@@ -0,0 +1,442 @@
# Elasticsearch Output Plugin
This plugin writes metrics to [Elasticsearch][elasticsearch] via HTTP using the
[Elastic client library][client_lib]. The plugin supports Elasticsearch
releases from v5.x up to v7.x.
⭐ Telegraf v0.1.5
🏷️ datastore, logging
💻 all
[elasticsearch]: https://www.elastic.co
[client_lib]: http://olivere.github.io/elastic/
## Elasticsearch indexes and templates
### Indexes per time-frame
This plugin can manage indexes per time-frame, as commonly done in other tools
with Elasticsearch.
The timestamp of the metric collected will be used to decide the index
destination.
For more information about this usage on Elasticsearch, check [the
docs][1].
[1]: https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe
### Template management
Index templates are used in Elasticsearch to define settings and mappings for
the indexes and how the fields should be analyzed. For more information on how
this works, see [the docs][2].
This plugin can create a working template for use with telegraf metrics. It
uses the Elasticsearch dynamic templates feature to set proper types for the
tag and metric fields. If the specified template already exists, it will not be
overwritten unless you configure this plugin to do so. Thus you can customize
this template after its creation if necessary.
Example of an index template created by telegraf on Elasticsearch 5.x:
```json
{
"order": 0,
"template": "telegraf-*",
"settings": {
"index": {
"mapping": {
"total_fields": {
"limit": "5000"
}
},
"auto_expand_replicas" : "0-1",
"codec" : "best_compression",
"refresh_interval": "10s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"tags": {
"path_match": "tag.*",
"mapping": {
"ignore_above": 512,
"type": "keyword"
},
"match_mapping_type": "string"
}
},
{
"metrics_long": {
"mapping": {
"index": false,
"type": "float"
},
"match_mapping_type": "long"
}
},
{
"metrics_double": {
"mapping": {
"index": false,
"type": "float"
},
"match_mapping_type": "double"
}
},
{
"text_fields": {
"mapping": {
"norms": false
},
"match": "*"
}
}
],
"_all": {
"enabled": false
},
"properties": {
"@timestamp": {
"type": "date"
},
"measurement_name": {
"type": "keyword"
}
}
}
},
"aliases": {}
}
```
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
### Example events
This plugin will format the events in the following way:
```json
{
"@timestamp": "2017-01-01T00:00:00+00:00",
"measurement_name": "cpu",
"cpu": {
"usage_guest": 0,
"usage_guest_nice": 0,
"usage_idle": 71.85413456197966,
"usage_iowait": 0.256805341656516,
"usage_irq": 0,
"usage_nice": 0,
"usage_softirq": 0.2054442732579466,
"usage_steal": 0,
"usage_system": 15.04879301548127,
"usage_user": 12.634822807288275
},
"tag": {
"cpu": "cpu-total",
"host": "elastichost",
"dc": "datacenter1"
}
}
```
```json
{
"@timestamp": "2017-01-01T00:00:00+00:00",
"measurement_name": "system",
"system": {
"load1": 0.78,
"load15": 0.8,
"load5": 0.8,
"n_cpus": 2,
"n_users": 2
},
"tag": {
"host": "elastichost",
"dc": "datacenter1"
}
}
```
### Timestamp Timezone
Elasticsearch documents use RFC3339 timestamps, which include timezone
information (for example `2017-01-01T00:00:00-08:00`). By default, the timezone
configured on the system running Telegraf will be used.
However, this may not always be desirable: Elasticsearch preserves timezone
information and includes it when returning associated documents. This can cause
issues for some pipelines, in particular those that do not parse retrieved
timestamps and instead assume that the returned timezone will always be
consistent.
Telegraf honours the timezone configured in the environment variable `TZ`, so
the timezone sent to Elasticsearch can be amended without needing to change the
timezone configured in the host system:
```sh
# For example, to use Pacific time:
export TZ="America/Los_Angeles"
# Or to use UTC:
export TZ="UTC"
```
If Telegraf is being run as a system service, this can be configured in the
following way on Linux:
```sh
echo TZ="UTC" | sudo tee -a /etc/default/telegraf
```
## OpenSearch Support
OpenSearch is a fork of Elasticsearch hosted by AWS. The OpenSearch server will
report itself to clients with an AWS-specific version (e.g. v1.0). In reality,
the actual underlying Elasticsearch version is v7.1. This breaks Telegraf and
other Elasticsearch clients that need to know what major version they are
interfacing with.
Amazon has created a [compatibility mode][3] to allow existing Elasticsearch
clients to work properly when the version needs to be checked. To enable
compatibility mode, users need to set `override_main_response_version` to
`true`.
On existing clusters run:
```json
PUT /_cluster/settings
{
"persistent" : {
"compatibility.override_main_response_version" : true
}
}
```
And on new clusters set the option to true under advanced options:
```json
POST https://es.us-east-1.amazonaws.com/2021-01-01/opensearch/upgradeDomain
{
"DomainName": "domain-name",
"TargetVersion": "OpenSearch_1.0",
"AdvancedOptions": {
"override_main_response_version": "true"
}
}
```
[3]: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/rename.html#rename-upgrade
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and field values, create aliases, configure ordering,
etc. See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Secret-store support
This plugin supports secrets from secret-stores for the `username`,
`password` and `auth_bearer_token` options.
See the [secret-store documentation][SECRETSTORE] for more details on how
to use them.
[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets
## Configuration
```toml @sample.conf
# Configuration for Elasticsearch to send metrics to.
[[outputs.elasticsearch]]
## The full HTTP endpoint URL for your Elasticsearch instance
## Multiple urls can be specified as part of the same cluster;
## this means that only ONE of the urls will be written to in each interval
urls = [ "http://node1.es.example.com:9200" ] # required.
## Elasticsearch client timeout, defaults to "5s" if not set.
timeout = "5s"
## Set to true to ask Elasticsearch for a list of all cluster nodes,
## thus it is not necessary to list all nodes in the urls config option
enable_sniffer = false
## Set to true to enable gzip compression
enable_gzip = false
## Set the interval to check if the Elasticsearch nodes are available
## Setting to "0s" will disable the health check (not recommended in production)
health_check_interval = "10s"
## Set the timeout for periodic health checks.
# health_check_timeout = "1s"
## HTTP basic authentication details
# username = "telegraf"
# password = "mypassword"
## HTTP bearer token authentication details
# auth_bearer_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
## Index Config
## The target index for metrics (Elasticsearch will create it if it does not exist).
## You can use the date specifiers below to create indexes per time frame.
## The metric timestamp will be used to decide the destination index name
# %Y - year (2016)
# %y - last two digits of year (00..99)
# %m - month (01..12)
# %d - day of month (e.g., 01)
# %H - hour (00..23)
# %V - week of the year (ISO week) (01..53)
## Additionally, you can specify a tag name using the notation {{tag_name}}
## which will be used as part of the index name. If the tag does not exist,
## the default tag value will be used.
# index_name = "telegraf-{{host}}-%Y.%m.%d"
# default_tag_value = "none"
index_name = "telegraf-%Y.%m.%d" # required.
## Optional Index Config
## Set to true if Telegraf should use the "create" OpType while indexing
# use_optype_create = false
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Template Config
## Set to true if you want telegraf to manage its index template.
## If enabled it will create a recommended index template for telegraf indexes
manage_template = true
## The template name used for telegraf indexes
template_name = "telegraf"
## Set to true if you want telegraf to overwrite an existing template
overwrite_template = false
## If set to true a unique ID hash will be sent as a sha256(concat(timestamp,measurement,series-hash)) string.
## This enables resending and updating metric points while avoiding duplicated metrics with different IDs.
force_document_id = false
## Specifies the handling of NaN and Inf values.
## This option can have the following values:
## none -- do not modify field-values (default); will produce an error if NaNs or infs are encountered
## drop -- drop fields containing NaNs or infs
## replace -- replace with the value in "float_replacement_value" (default: 0.0)
## NaNs and inf will be replaced with the given number, -inf with the negative of that number
# float_handling = "none"
# float_replacement_value = 0.0
## Pipeline Config
## To use an ingest pipeline, set this to the name of the pipeline you want to use.
# use_pipeline = "my_pipeline"
## Additionally, you can specify a tag name using the notation {{tag_name}}
## which will be used as part of the pipeline name. If the tag does not exist,
## the default pipeline will be used as the pipeline. If no default pipeline is set,
## no pipeline is used for the metric.
# use_pipeline = "{{es_pipeline}}"
# default_pipeline = "my_pipeline"
#
# Custom HTTP headers
# To pass custom HTTP headers, define them in the section below
# [outputs.elasticsearch.headers]
# "X-Custom-Header" = "custom-value"
## Template Index Settings
## Overrides the template settings.index section with any provided options.
## Defaults provided here in the config
# template_index_settings = {
# refresh_interval = "10s",
# mapping.total_fields.limit = 5000,
# auto_expand_replicas = "0-1",
# codec = "best_compression"
# }
```
### Permissions
If you are using authentication within your Elasticsearch cluster, you need to
create an account and a role with at least the `manage` privilege in the
Cluster Privileges category. Otherwise, your account will not be able to
connect to your Elasticsearch cluster and send metrics to it. After that, you
need to add the `create_index` and `write` privileges to your specific index
pattern.
### Required parameters
* `urls`: A list containing the full HTTP URL of one or more nodes from your
Elasticsearch instance.
* `index_name`: The target index for metrics. You can use the date specifiers
below to create indexes per time frame.
```
%Y - year (2017)
%y - last two digits of year (00..99)
%m - month (01..12)
%d - day of month (e.g., 01)
%H - hour (00..23)
%V - week of the year (ISO week) (01..53)
```
Additionally, you can specify dynamic index names by using tags with the
notation `{{tag_name}}`. This will store the metrics with different tag values
in different indices. If the tag does not exist in a particular metric, the
`default_tag_value` will be used instead, as illustrated in the sketch below.
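For illustration, here is a minimal, standalone Go sketch of these substitution rules (the authoritative logic lives in the plugin's `GetTagKeys` and `GetIndexName` functions, shown in the source further below); the index pattern and tag values are made-up examples:
```go
package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	indexName := "telegraf-{{host}}-%Y.%m.%d" // example pattern
	tags := map[string]string{"host": "elastichost"}
	t := time.Date(2017, 1, 1, 0, 0, 0, 0, time.UTC) // metric timestamp

	// Date specifiers are resolved from the metric timestamp in UTC.
	indexName = strings.NewReplacer(
		"%Y", t.Format("2006"),
		"%y", t.Format("06"),
		"%m", t.Format("01"),
		"%d", t.Format("02"),
		"%H", t.Format("15"),
	).Replace(indexName)

	// Tag placeholders fall back to default_tag_value when missing.
	host, ok := tags["host"]
	if !ok {
		host = "none" // default_tag_value
	}
	indexName = strings.ReplaceAll(indexName, "{{host}}", host)

	fmt.Println(indexName) // telegraf-elastichost-2017.01.01
}
```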
### Optional parameters
* `timeout`: Elasticsearch client timeout, defaults to "5s" if not set.
* `enable_sniffer`: Set to true to ask Elasticsearch for a list of all cluster
nodes, so it is not necessary to list all nodes in the urls config option.
* `health_check_interval`: Set the interval to check if the nodes are
available. Setting it to "0s" will disable the health check (not recommended in
production).
* `username`: The username for HTTP basic authentication details (e.g. when
using Shield).
* `password`: The password for HTTP basic authentication details (e.g. when
using Shield).
* `manage_template`: Set to true if you want telegraf to manage its index
template. If enabled it will create a recommended index template for telegraf
indexes.
* `template_name`: The template name used for telegraf indexes.
* `overwrite_template`: Set to true if you want telegraf to overwrite an
existing template.
* `force_document_id`: Set to true to compute a unique hash as
sha256(concat(timestamp,measurement,series-hash)); this enables resending or
updating data without creating duplicated documents in Elasticsearch.
* `float_handling`: Specifies how to handle `NaN` and infinite field
values. `"none"` (default) will do nothing, `"drop"` will drop the field and
`"replace"` will replace the field value by the number in
`float_replacement_value` (see the sketch after this list).
* `float_replacement_value`: Value (defaulting to `0.0`) to replace `NaN`s and
`inf`s if `float_handling` is set to `replace`. Negative `inf` will be
replaced by the negative of this number to respect the sign of the field's
original value.
* `use_optype_create`: If set, the "create" operation type will be used when
indexing into Elasticsearch, which is needed when using the Elasticsearch
data streams feature.
* `use_pipeline`: If set, the given value will be used as the pipeline to call
when sending events to Elasticsearch. Additionally, you can specify dynamic
pipeline names by using tags with the notation `{{tag_name}}`. If the tag
does not exist in a particular metric, the `default_pipeline` will be used
instead.
* `default_pipeline`: If dynamic pipeline names are used and the tag does not
exist in a particular metric, this value will be used instead.
* `headers`: Custom HTTP headers, which are passed along with each request to
Elasticsearch.
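To make the float handling rules concrete, here is a condensed, hypothetical re-statement of the per-field logic in the plugin's `Write` method (shown in the source further below); `handle` is not a real plugin function:
```go
package main

import (
	"fmt"
	"math"
)

// handle mirrors the per-field float handling: "none" keeps values
// untouched, "drop" removes NaN/inf fields, and "replace" substitutes
// them with +replacement (NaN, +inf) or -replacement (-inf).
func handle(mode string, replacement float64, value interface{}) (interface{}, bool) {
	v, ok := value.(float64)
	if !ok || mode == "none" || (!math.IsNaN(v) && !math.IsInf(v, 0)) {
		return value, true // keep the value as-is
	}
	if mode == "drop" {
		return nil, false // drop the field entirely
	}
	if math.IsNaN(v) || math.IsInf(v, 1) {
		return replacement, true
	}
	return -replacement, true
}

func main() {
	for _, v := range []float64{1.5, math.NaN(), math.Inf(1), math.Inf(-1)} {
		out, keep := handle("replace", 1.0, v)
		fmt.Println(v, "->", out, keep)
	}
}
```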
## Known issues
Integer values bigger than 2^63 and smaller than 1e21 (or within the same
window for their negative counterparts) are encoded by the Go JSON encoder in
decimal format, which is not fully supported by Elasticsearch dynamic field
mapping. This causes metrics with such values to be dropped if a field mapping
has not yet been created on the telegraf index. In that case you will see an
exception on the Elasticsearch side like this:
```json
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_state_exception","reason":"No matching token for number_type [BIG_INTEGER]"}},"status":400}
```
The correct field mapping will be created on the telegraf index as soon as a
supported JSON value is received by Elasticsearch, and subsequent insertions
will work because the field mapping will already exist.
This issue is caused by the way Elasticsearch tries to detect integer fields,
and by how Go encodes numbers in JSON. There is no clear workaround for this at
the moment. The sketch below reproduces the problematic encoding.
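For illustration, a minimal Go program showing the kind of JSON that triggers the mapping exception (the field name is a made-up example):
```go
package main

import (
	"encoding/json"
	"fmt"
	"math"
)

func main() {
	// Values above 2^63-1 are emitted as plain decimal numbers that do
	// not fit Elasticsearch's signed 64-bit "long" type.
	b, _ := json.Marshal(map[string]uint64{"value": math.MaxUint64})
	fmt.Println(string(b)) // {"value":18446744073709551615}
}
```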

plugins/outputs/elasticsearch/elasticsearch.go

@@ -0,0 +1,546 @@
//go:generate ../../../tools/readme_config_includer/generator
package elasticsearch
import (
"bytes"
"context"
"crypto/sha256"
_ "embed"
"encoding/json"
"errors"
"fmt"
"math"
"net/http"
"net/url"
"strconv"
"strings"
"text/template"
"time"
"github.com/olivere/elastic"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/plugins/common/tls"
"github.com/influxdata/telegraf/plugins/outputs"
)
//go:embed sample.conf
var sampleConfig string
type Elasticsearch struct {
AuthBearerToken config.Secret `toml:"auth_bearer_token"`
DefaultPipeline string `toml:"default_pipeline"`
DefaultTagValue string `toml:"default_tag_value"`
EnableGzip bool `toml:"enable_gzip"`
EnableSniffer bool `toml:"enable_sniffer"`
FloatHandling string `toml:"float_handling"`
FloatReplacement float64 `toml:"float_replacement_value"`
ForceDocumentID bool `toml:"force_document_id"`
HealthCheckInterval config.Duration `toml:"health_check_interval"`
HealthCheckTimeout config.Duration `toml:"health_check_timeout"`
IndexName string `toml:"index_name"`
IndexTemplate map[string]interface{} `toml:"template_index_settings"`
ManageTemplate bool `toml:"manage_template"`
OverwriteTemplate bool `toml:"overwrite_template"`
UseOpTypeCreate bool `toml:"use_optype_create"`
Username config.Secret `toml:"username"`
Password config.Secret `toml:"password"`
TemplateName string `toml:"template_name"`
Timeout config.Duration `toml:"timeout"`
URLs []string `toml:"urls"`
UsePipeline string `toml:"use_pipeline"`
Headers map[string]string `toml:"headers"`
Log telegraf.Logger `toml:"-"`
majorReleaseNumber int
pipelineName string
pipelineTagKeys []string
tagKeys []string
tls.ClientConfig
Client *elastic.Client
}
const telegrafTemplate = `
{
{{ if (lt .Version 6) }}
"template": "{{.TemplatePattern}}",
{{ else }}
"index_patterns" : [ "{{.TemplatePattern}}" ],
{{ end }}
"settings": {
"index": {{.IndexTemplate}}
},
"mappings" : {
{{ if (lt .Version 7) }}
"metrics" : {
{{ if (lt .Version 6) }}
"_all": { "enabled": false },
{{ end }}
{{ end }}
"properties" : {
"@timestamp" : { "type" : "date" },
"measurement_name" : { "type" : "keyword" }
},
"dynamic_templates": [
{
"tags": {
"match_mapping_type": "string",
"path_match": "tag.*",
"mapping": {
"ignore_above": 512,
"type": "keyword"
}
}
},
{
"metrics_long": {
"match_mapping_type": "long",
"mapping": {
"type": "float",
"index": false
}
}
},
{
"metrics_double": {
"match_mapping_type": "double",
"mapping": {
"type": "float",
"index": false
}
}
},
{
"text_fields": {
"match": "*",
"mapping": {
"norms": false
}
}
}
]
{{ if (lt .Version 7) }}
}
{{ end }}
}
}`
const defaultTemplateIndexSettings = `
{
"refresh_interval": "10s",
"mapping.total_fields.limit": 5000,
"auto_expand_replicas": "0-1",
"codec": "best_compression"
}`
type templatePart struct {
TemplatePattern string
Version int
IndexTemplate string
}
func (*Elasticsearch) SampleConfig() string {
return sampleConfig
}
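// Connect validates the configuration, creates the Elasticsearch client,
// checks the server version and, if template management is enabled, creates
// or updates the index template.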
func (a *Elasticsearch) Connect() error {
if a.URLs == nil || a.IndexName == "" {
return errors.New("elasticsearch urls or index_name is not defined")
}
// Determine if we should process NaN and inf values
switch a.FloatHandling {
case "", "none":
a.FloatHandling = "none"
case "drop", "replace":
default:
return fmt.Errorf("invalid float_handling type %q", a.FloatHandling)
}
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(a.Timeout))
defer cancel()
var clientOptions []elastic.ClientOptionFunc
tlsCfg, err := a.ClientConfig.TLSConfig()
if err != nil {
return err
}
tr := &http.Transport{
TLSClientConfig: tlsCfg,
}
httpclient := &http.Client{
Transport: tr,
Timeout: time.Duration(a.Timeout),
}
elasticURL, err := url.Parse(a.URLs[0])
if err != nil {
return fmt.Errorf("parsing URL failed: %w", err)
}
clientOptions = append(clientOptions,
elastic.SetHttpClient(httpclient),
elastic.SetSniff(a.EnableSniffer),
elastic.SetScheme(elasticURL.Scheme),
elastic.SetURL(a.URLs...),
elastic.SetHealthcheckInterval(time.Duration(a.HealthCheckInterval)),
elastic.SetHealthcheckTimeout(time.Duration(a.HealthCheckTimeout)),
elastic.SetGzip(a.EnableGzip),
)
if len(a.Headers) > 0 {
headers := http.Header{}
for k, vals := range a.Headers {
for _, v := range strings.Split(vals, ",") {
headers.Add(k, v)
}
}
clientOptions = append(clientOptions, elastic.SetHeaders(headers))
}
authOptions, err := a.getAuthOptions()
if err != nil {
return err
}
clientOptions = append(clientOptions, authOptions...)
if time.Duration(a.HealthCheckInterval) == 0 {
clientOptions = append(clientOptions,
elastic.SetHealthcheck(false),
)
a.Log.Debugf("Disabling health check")
}
client, err := elastic.NewClient(clientOptions...)
if err != nil {
return err
}
// check for ES version on first node
esVersion, err := client.ElasticsearchVersion(a.URLs[0])
if err != nil {
return fmt.Errorf("elasticsearch version check failed: %w", err)
}
// quit if ES version is not supported
majorReleaseNumber, err := strconv.Atoi(strings.Split(esVersion, ".")[0])
if err != nil || majorReleaseNumber < 5 {
return fmt.Errorf("elasticsearch version not supported: %s", esVersion)
}
a.Log.Infof("Elasticsearch version: %q", esVersion)
a.Client = client
a.majorReleaseNumber = majorReleaseNumber
if a.ManageTemplate {
err := a.manageTemplate(ctx)
if err != nil {
return err
}
}
a.IndexName, a.tagKeys = GetTagKeys(a.IndexName)
a.pipelineName, a.pipelineTagKeys = GetTagKeys(a.UsePipeline)
return nil
}
// GetPointID generates a unique ID for a Metric Point
func GetPointID(m telegraf.Metric) string {
var buffer bytes.Buffer
// The timestamp (ns), measurement name and series hash are concatenated to compute the final SHA256-based hash ID
buffer.WriteString(strconv.FormatInt(m.Time().Local().UnixNano(), 10))
buffer.WriteString(m.Name())
buffer.WriteString(strconv.FormatUint(m.HashID(), 10))
return fmt.Sprintf("%x", sha256.Sum256(buffer.Bytes()))
}
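// Write sends the metrics in a single bulk request, resolving the
// time-based index name and optional ingest pipeline for each metric.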
func (a *Elasticsearch) Write(metrics []telegraf.Metric) error {
if len(metrics) == 0 {
return nil
}
bulkRequest := a.Client.Bulk()
for _, metric := range metrics {
var name = metric.Name()
// index name has to be re-evaluated each time for telegraf
// to send the metric to the correct time-based index
indexName := a.GetIndexName(a.IndexName, metric.Time(), a.tagKeys, metric.Tags())
// Handle NaN and inf field-values
fields := make(map[string]interface{})
for k, value := range metric.Fields() {
v, ok := value.(float64)
if !ok || a.FloatHandling == "none" || !(math.IsNaN(v) || math.IsInf(v, 0)) {
fields[k] = value
continue
}
if a.FloatHandling == "drop" {
continue
}
if math.IsNaN(v) || math.IsInf(v, 1) {
fields[k] = a.FloatReplacement
} else {
fields[k] = -a.FloatReplacement
}
}
m := make(map[string]interface{})
m["@timestamp"] = metric.Time()
m["measurement_name"] = name
m["tag"] = metric.Tags()
m[name] = fields
br := elastic.NewBulkIndexRequest().Index(indexName).Doc(m)
if a.UseOpTypeCreate {
br.OpType("create")
}
if a.ForceDocumentID {
id := GetPointID(metric)
br.Id(id)
}
if a.majorReleaseNumber <= 6 {
br.Type("metrics")
}
if a.UsePipeline != "" {
if pipelineName := a.getPipelineName(a.pipelineName, a.pipelineTagKeys, metric.Tags()); pipelineName != "" {
br.Pipeline(pipelineName)
}
}
bulkRequest.Add(br)
}
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(a.Timeout))
defer cancel()
res, err := bulkRequest.Do(ctx)
if err != nil {
return fmt.Errorf("error sending bulk request to Elasticsearch: %w", err)
}
if res.Errors {
for id, err := range res.Failed() {
a.Log.Errorf(
"Elasticsearch indexing failure, id: %d, status: %d, error: %s, caused by: %s, %s",
id,
err.Status,
err.Error.Reason,
err.Error.CausedBy["reason"],
err.Error.CausedBy["type"],
)
break
}
return fmt.Errorf("elasticsearch failed to index %d metrics", len(res.Failed()))
}
return nil
}
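// manageTemplate creates or updates the index template derived from the
// configured index name prefix.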
func (a *Elasticsearch) manageTemplate(ctx context.Context) error {
if a.TemplateName == "" {
return errors.New("elasticsearch template_name configuration not defined")
}
templateExists, errExists := a.Client.IndexTemplateExists(a.TemplateName).Do(ctx)
if errExists != nil {
return fmt.Errorf("elasticsearch template check failed, template name: %s, error: %w", a.TemplateName, errExists)
}
templatePattern := a.IndexName
if strings.Contains(templatePattern, "%") {
templatePattern = templatePattern[0:strings.Index(templatePattern, "%")]
}
if strings.Contains(templatePattern, "{{") {
templatePattern = templatePattern[0:strings.Index(templatePattern, "{{")]
}
if templatePattern == "" {
return errors.New("template cannot be created for dynamic index names without an index prefix")
}
if a.OverwriteTemplate || !templateExists {
data, err := a.createNewTemplate(templatePattern)
if err != nil {
return err
}
_, errCreateTemplate := a.Client.IndexPutTemplate(a.TemplateName).BodyString(data.String()).Do(ctx)
if errCreateTemplate != nil {
return fmt.Errorf("elasticsearch failed to create index template %s: %w", a.TemplateName, errCreateTemplate)
}
a.Log.Debugf("Template %s created or updated\n", a.TemplateName)
} else {
a.Log.Debug("Found existing Elasticsearch template. Skipping template management")
}
return nil
}
func (a *Elasticsearch) createNewTemplate(templatePattern string) (*bytes.Buffer, error) {
var indexTemplate string
if a.IndexTemplate != nil {
data, err := json.Marshal(&a.IndexTemplate)
if err != nil {
return nil, fmt.Errorf("elasticsearch failed to create index settings for template %s: %w", a.TemplateName, err)
}
indexTemplate = string(data)
} else {
indexTemplate = defaultTemplateIndexSettings
}
tp := templatePart{
TemplatePattern: templatePattern + "*",
Version: a.majorReleaseNumber,
IndexTemplate: indexTemplate,
}
t := template.Must(template.New("template").Parse(telegrafTemplate))
var tmpl bytes.Buffer
if err := t.Execute(&tmpl, tp); err != nil {
return nil, err
}
return &tmpl, nil
}
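// GetTagKeys replaces every {{tag_name}} placeholder in the index name with
// a %s format verb and returns the resulting format string together with the
// ordered list of extracted tag keys.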
func GetTagKeys(indexName string) (string, []string) {
tagKeys := make([]string, 0)
startTag := strings.Index(indexName, "{{")
for startTag >= 0 {
endTag := strings.Index(indexName, "}}")
if endTag < 0 {
startTag = -1
} else {
tagName := indexName[startTag+2 : endTag]
var tagReplacer = strings.NewReplacer(
"{{"+tagName+"}}", "%s",
)
indexName = tagReplacer.Replace(indexName)
tagKeys = append(tagKeys, strings.TrimSpace(tagName))
startTag = strings.Index(indexName, "{{")
}
}
return indexName, tagKeys
}
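// GetIndexName resolves the date specifiers (%Y, %y, %m, %d, %H, %V) against
// the metric timestamp in UTC and fills the %s placeholders produced by
// GetTagKeys with the metric's tag values, falling back to DefaultTagValue.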
func (a *Elasticsearch) GetIndexName(indexName string, eventTime time.Time, tagKeys []string, metricTags map[string]string) string {
if strings.Contains(indexName, "%") {
var dateReplacer = strings.NewReplacer(
"%Y", eventTime.UTC().Format("2006"),
"%y", eventTime.UTC().Format("06"),
"%m", eventTime.UTC().Format("01"),
"%d", eventTime.UTC().Format("02"),
"%H", eventTime.UTC().Format("15"),
"%V", getISOWeek(eventTime.UTC()),
)
indexName = dateReplacer.Replace(indexName)
}
tagValues := make([]interface{}, 0, len(tagKeys))
for _, key := range tagKeys {
if value, ok := metricTags[key]; ok {
tagValues = append(tagValues, value)
} else {
a.Log.Debugf("Tag %q not found, using %q on index name instead\n", key, a.DefaultTagValue)
tagValues = append(tagValues, a.DefaultTagValue)
}
}
return fmt.Sprintf(indexName, tagValues...)
}
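// getPipelineName fills the %s placeholders of a dynamic pipeline name with
// tag values; if any referenced tag is missing, the default pipeline is used.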
func (a *Elasticsearch) getPipelineName(pipelineInput string, tagKeys []string, metricTags map[string]string) string {
if !strings.Contains(pipelineInput, "%") || len(tagKeys) == 0 {
return pipelineInput
}
var tagValues []interface{}
for _, key := range tagKeys {
if value, ok := metricTags[key]; ok {
tagValues = append(tagValues, value)
continue
}
a.Log.Debugf("Tag %s not found, reverting to default pipeline instead.", key)
return a.DefaultPipeline
}
return fmt.Sprintf(pipelineInput, tagValues...)
}
func getISOWeek(eventTime time.Time) string {
_, week := eventTime.ISOWeek()
return strconv.Itoa(week)
}
func (a *Elasticsearch) Close() error {
a.Client = nil
return nil
}
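// getAuthOptions builds the client options for HTTP basic authentication and
// bearer-token authentication from the configured secrets.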
func (a *Elasticsearch) getAuthOptions() ([]elastic.ClientOptionFunc, error) {
var fns []elastic.ClientOptionFunc
if !a.Username.Empty() && !a.Password.Empty() {
username, err := a.Username.Get()
if err != nil {
return nil, fmt.Errorf("getting username failed: %w", err)
}
password, err := a.Password.Get()
if err != nil {
username.Destroy()
return nil, fmt.Errorf("getting password failed: %w", err)
}
fns = append(fns, elastic.SetBasicAuth(username.String(), password.String()))
username.Destroy()
password.Destroy()
}
if !a.AuthBearerToken.Empty() {
token, err := a.AuthBearerToken.Get()
if err != nil {
return nil, fmt.Errorf("getting token failed: %w", err)
}
auth := []string{"Bearer " + token.String()}
fns = append(fns, elastic.SetHeaders(http.Header{"Authorization": auth}))
token.Destroy()
}
return fns, nil
}
func init() {
outputs.Add("elasticsearch", func() telegraf.Output {
return &Elasticsearch{
Timeout: config.Duration(time.Second * 5),
HealthCheckInterval: config.Duration(time.Second * 10),
HealthCheckTimeout: config.Duration(time.Second * 1),
}
})
}

plugins/outputs/elasticsearch/elasticsearch_test.go

@@ -0,0 +1,850 @@
package elasticsearch
import (
"context"
"encoding/json"
"fmt"
"math"
"net/http"
"net/http/httptest"
"reflect"
"testing"
"time"
"github.com/docker/go-connections/nat"
"github.com/stretchr/testify/require"
"github.com/testcontainers/testcontainers-go/wait"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/testutil"
)
const servicePort = "9200"
func launchTestContainer(t *testing.T) *testutil.Container {
container := testutil.Container{
Image: "elasticsearch:6.8.23",
ExposedPorts: []string{servicePort},
Env: map[string]string{
"discovery.type": "single-node",
},
WaitingFor: wait.ForAll(
wait.ForLog("] mode [basic] - valid"),
wait.ForListeningPort(nat.Port(servicePort)),
),
}
err := container.Start()
require.NoError(t, err, "failed to start container")
return &container
}
func TestConnectAndWriteIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
e := &Elasticsearch{
URLs: urls,
IndexName: "test-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
EnableGzip: true,
ManageTemplate: true,
TemplateName: "telegraf",
OverwriteTemplate: false,
HealthCheckInterval: config.Duration(time.Second * 10),
HealthCheckTimeout: config.Duration(time.Second * 1),
Log: testutil.Logger{},
}
// Verify that we can connect to Elasticsearch
err := e.Connect()
require.NoError(t, err)
// Verify that we can successfully write data to Elasticsearch
err = e.Write(testutil.MockMetrics())
require.NoError(t, err)
}
func TestConnectAndWriteMetricWithNaNValueEmptyIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
e := &Elasticsearch{
URLs: urls,
IndexName: "test-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
ManageTemplate: true,
TemplateName: "telegraf",
OverwriteTemplate: false,
HealthCheckInterval: config.Duration(time.Second * 10),
HealthCheckTimeout: config.Duration(time.Second * 1),
Log: testutil.Logger{},
}
metrics := []telegraf.Metric{
testutil.TestMetric(math.NaN()),
testutil.TestMetric(math.Inf(1)),
testutil.TestMetric(math.Inf(-1)),
}
// Verify that we can connect to Elasticsearch
err := e.Connect()
require.NoError(t, err)
// Verify that writing metrics with unhandled NaN/inf/-inf values fails
for _, m := range metrics {
err = e.Write([]telegraf.Metric{m})
require.Error(t, err, "error sending bulk request to Elasticsearch: json: unsupported value: NaN")
}
}
func TestConnectAndWriteMetricWithNaNValueNoneIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
e := &Elasticsearch{
URLs: urls,
IndexName: "test-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
ManageTemplate: true,
TemplateName: "telegraf",
OverwriteTemplate: false,
HealthCheckInterval: config.Duration(time.Second * 10),
HealthCheckTimeout: config.Duration(time.Second * 1),
FloatHandling: "none",
Log: testutil.Logger{},
}
metrics := []telegraf.Metric{
testutil.TestMetric(math.NaN()),
testutil.TestMetric(math.Inf(1)),
testutil.TestMetric(math.Inf(-1)),
}
// Verify that we can connect to Elasticsearch
err := e.Connect()
require.NoError(t, err)
// Verify that writing metrics with unhandled NaN/inf/-inf values fails
for _, m := range metrics {
err = e.Write([]telegraf.Metric{m})
require.Error(t, err, "error sending bulk request to Elasticsearch: json: unsupported value: NaN")
}
}
func TestConnectAndWriteMetricWithNaNValueDropIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
e := &Elasticsearch{
URLs: urls,
IndexName: "test-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
ManageTemplate: true,
TemplateName: "telegraf",
OverwriteTemplate: false,
HealthCheckInterval: config.Duration(time.Second * 10),
HealthCheckTimeout: config.Duration(time.Second * 1),
FloatHandling: "drop",
Log: testutil.Logger{},
}
metrics := []telegraf.Metric{
testutil.TestMetric(math.NaN()),
testutil.TestMetric(math.Inf(1)),
testutil.TestMetric(math.Inf(-1)),
}
// Verify that we can connect to Elasticsearch
err := e.Connect()
require.NoError(t, err)
// Verify that metrics with NaN/inf values are dropped and the write succeeds
for _, m := range metrics {
err = e.Write([]telegraf.Metric{m})
require.NoError(t, err)
}
}
func TestConnectAndWriteMetricWithNaNValueReplacementIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
tests := []struct {
floatHandle string
floatReplacement float64
expectError bool
}{
{
"none",
0.0,
true,
},
{
"drop",
0.0,
false,
},
{
"replace",
0.0,
false,
},
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
for _, test := range tests {
e := &Elasticsearch{
URLs: urls,
IndexName: "test-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
ManageTemplate: true,
TemplateName: "telegraf",
OverwriteTemplate: false,
HealthCheckInterval: config.Duration(time.Second * 10),
HealthCheckTimeout: config.Duration(time.Second * 1),
FloatHandling: test.floatHandle,
FloatReplacement: test.floatReplacement,
Log: testutil.Logger{},
}
metrics := []telegraf.Metric{
testutil.TestMetric(math.NaN()),
testutil.TestMetric(math.Inf(1)),
testutil.TestMetric(math.Inf(-1)),
}
err := e.Connect()
require.NoError(t, err)
for _, m := range metrics {
err = e.Write([]telegraf.Metric{m})
if test.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
}
}
}
}
func TestTemplateManagementEmptyTemplateIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
e := &Elasticsearch{
URLs: urls,
IndexName: "test-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
EnableGzip: true,
ManageTemplate: true,
TemplateName: "",
OverwriteTemplate: true,
Log: testutil.Logger{},
}
err := e.manageTemplate(t.Context())
require.Error(t, err)
}
func TestUseOpTypeCreate(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
e := &Elasticsearch{
URLs: urls,
IndexName: "test-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
EnableGzip: true,
ManageTemplate: true,
TemplateName: "telegraf",
OverwriteTemplate: true,
UseOpTypeCreate: true,
Log: testutil.Logger{},
}
ctx, cancel := context.WithTimeout(t.Context(), time.Duration(e.Timeout))
defer cancel()
metrics := []telegraf.Metric{
testutil.TestMetric(1),
}
err := e.Connect()
require.NoError(t, err)
err = e.manageTemplate(ctx)
require.NoError(t, err)
// Verify that we can write metrics using the "create" op-type
for _, m := range metrics {
err = e.Write([]telegraf.Metric{m})
require.NoError(t, err)
}
}
func TestTemplateManagementIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
e := &Elasticsearch{
URLs: urls,
IndexName: "test-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
EnableGzip: true,
ManageTemplate: true,
TemplateName: "telegraf",
OverwriteTemplate: true,
Log: testutil.Logger{},
}
ctx, cancel := context.WithTimeout(t.Context(), time.Duration(e.Timeout))
defer cancel()
err := e.Connect()
require.NoError(t, err)
err = e.manageTemplate(ctx)
require.NoError(t, err)
}
func TestTemplateInvalidIndexPatternIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
container := launchTestContainer(t)
defer container.Terminate()
urls := []string{
fmt.Sprintf("http://%s:%s", container.Address, container.Ports[servicePort]),
}
e := &Elasticsearch{
URLs: urls,
IndexName: "{{host}}-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
EnableGzip: true,
ManageTemplate: true,
TemplateName: "telegraf",
OverwriteTemplate: true,
Log: testutil.Logger{},
}
err := e.Connect()
require.Error(t, err)
}
func TestGetTagKeys(t *testing.T) {
tests := []struct {
IndexName string
ExpectedIndexName string
ExpectedTagKeys []string
}{
{
IndexName: "indexname",
ExpectedIndexName: "indexname",
ExpectedTagKeys: make([]string, 0),
}, {
IndexName: "indexname-%Y",
ExpectedIndexName: "indexname-%Y",
ExpectedTagKeys: make([]string, 0),
}, {
IndexName: "indexname-%Y-%m",
ExpectedIndexName: "indexname-%Y-%m",
ExpectedTagKeys: make([]string, 0),
}, {
IndexName: "indexname-%Y-%m-%d",
ExpectedIndexName: "indexname-%Y-%m-%d",
ExpectedTagKeys: make([]string, 0),
}, {
IndexName: "indexname-%Y-%m-%d-%H",
ExpectedIndexName: "indexname-%Y-%m-%d-%H",
ExpectedTagKeys: make([]string, 0),
}, {
IndexName: "indexname-%y-%m",
ExpectedIndexName: "indexname-%y-%m",
ExpectedTagKeys: make([]string, 0),
}, {
IndexName: "indexname-{{tag1}}-%y-%m",
ExpectedIndexName: "indexname-%s-%y-%m",
ExpectedTagKeys: []string{"tag1"},
}, {
IndexName: "indexname-{{tag1}}-{{tag2}}-%y-%m",
ExpectedIndexName: "indexname-%s-%s-%y-%m",
ExpectedTagKeys: []string{"tag1", "tag2"},
}, {
IndexName: "indexname-{{tag1}}-{{tag2}}-{{tag3}}-%y-%m",
ExpectedIndexName: "indexname-%s-%s-%s-%y-%m",
ExpectedTagKeys: []string{"tag1", "tag2", "tag3"},
},
}
for _, test := range tests {
indexName, tagKeys := GetTagKeys(test.IndexName)
if indexName != test.ExpectedIndexName {
t.Errorf("Expected indexname %s, got %s\n", test.ExpectedIndexName, indexName)
}
if !reflect.DeepEqual(tagKeys, test.ExpectedTagKeys) {
t.Errorf("Expected tagKeys %s, got %s\n", test.ExpectedTagKeys, tagKeys)
}
}
}
func TestGetIndexName(t *testing.T) {
e := &Elasticsearch{
DefaultTagValue: "none",
Log: testutil.Logger{},
}
tests := []struct {
EventTime time.Time
Tags map[string]string
TagKeys []string
IndexName string
Expected string
}{
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
IndexName: "indexname",
Expected: "indexname",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
IndexName: "indexname-%Y",
Expected: "indexname-2014",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
IndexName: "indexname-%Y-%m",
Expected: "indexname-2014-12",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
IndexName: "indexname-%Y-%m-%d",
Expected: "indexname-2014-12-01",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
IndexName: "indexname-%Y-%m-%d-%H",
Expected: "indexname-2014-12-01-23",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
IndexName: "indexname-%y-%m",
Expected: "indexname-14-12",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
IndexName: "indexname-%Y-%V",
Expected: "indexname-2014-49",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
TagKeys: []string{"tag1"},
IndexName: "indexname-%s-%y-%m",
Expected: "indexname-value1-14-12",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
TagKeys: []string{"tag1", "tag2"},
IndexName: "indexname-%s-%s-%y-%m",
Expected: "indexname-value1-value2-14-12",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
TagKeys: []string{"tag1", "tag2", "tag3"},
IndexName: "indexname-%s-%s-%s-%y-%m",
Expected: "indexname-value1-value2-none-14-12",
},
}
for _, test := range tests {
indexName := e.GetIndexName(test.IndexName, test.EventTime, test.TagKeys, test.Tags)
if indexName != test.Expected {
t.Errorf("Expected indexname %s, got %s\n", test.Expected, indexName)
}
}
}
func TestGetPipelineName(t *testing.T) {
e := &Elasticsearch{
UsePipeline: "{{es-pipeline}}",
DefaultPipeline: "myDefaultPipeline",
Log: testutil.Logger{},
}
e.pipelineName, e.pipelineTagKeys = GetTagKeys(e.UsePipeline)
tests := []struct {
EventTime time.Time
Tags map[string]string
PipelineTagKeys []string
Expected string
}{
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
Expected: "myDefaultPipeline",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
Expected: "myDefaultPipeline",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "es-pipeline": "myOtherPipeline"},
Expected: "myOtherPipeline",
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "es-pipeline": "pipeline2"},
Expected: "pipeline2",
},
}
for _, test := range tests {
pipelineName := e.getPipelineName(e.pipelineName, e.pipelineTagKeys, test.Tags)
require.Equal(t, test.Expected, pipelineName)
}
// Set up the case with no pipeline configured. All the tests in this case should return "".
e = &Elasticsearch{
Log: testutil.Logger{},
}
e.pipelineName, e.pipelineTagKeys = GetTagKeys(e.UsePipeline)
for _, test := range tests {
pipelineName := e.getPipelineName(e.pipelineName, e.pipelineTagKeys, test.Tags)
require.Empty(t, pipelineName)
}
}
func TestPipelineConfigs(t *testing.T) {
tests := []struct {
EventTime time.Time
Tags map[string]string
PipelineTagKeys []string
Expected string
Elastic *Elasticsearch
}{
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
Elastic: &Elasticsearch{
Log: testutil.Logger{},
},
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "tag2": "value2"},
Elastic: &Elasticsearch{
DefaultPipeline: "myDefaultPipeline",
Log: testutil.Logger{},
},
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "es-pipeline": "myOtherPipeline"},
Expected: "myDefaultPipeline",
Elastic: &Elasticsearch{
UsePipeline: "myDefaultPipeline",
Log: testutil.Logger{},
},
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "es-pipeline": "pipeline2"},
Elastic: &Elasticsearch{
DefaultPipeline: "myDefaultPipeline",
Log: testutil.Logger{},
},
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "es-pipeline": "pipeline2"},
Expected: "pipeline2",
Elastic: &Elasticsearch{
UsePipeline: "{{es-pipeline}}",
Log: testutil.Logger{},
},
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1", "es-pipeline": "pipeline2"},
Expected: "value1-pipeline2",
Elastic: &Elasticsearch{
UsePipeline: "{{tag1}}-{{es-pipeline}}",
Log: testutil.Logger{},
},
},
{
EventTime: time.Date(2014, 12, 01, 23, 30, 00, 00, time.UTC),
Tags: map[string]string{"tag1": "value1"},
Elastic: &Elasticsearch{
UsePipeline: "{{es-pipeline}}",
Log: testutil.Logger{},
},
},
}
for _, test := range tests {
e := test.Elastic
e.pipelineName, e.pipelineTagKeys = GetTagKeys(e.UsePipeline)
pipelineName := e.getPipelineName(e.pipelineName, e.pipelineTagKeys, test.Tags)
require.Equal(t, test.Expected, pipelineName)
}
}
func TestRequestHeaderWhenGzipIsEnabled(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/_bulk":
if contentHeader := r.Header.Get("Content-Encoding"); contentHeader != "gzip" {
w.WriteHeader(http.StatusInternalServerError)
t.Errorf("Not equal, expected: %q, actual: %q", "gzip", contentHeader)
return
}
if acceptHeader := r.Header.Get("Accept-Encoding"); acceptHeader != "gzip" {
w.WriteHeader(http.StatusInternalServerError)
t.Errorf("Not equal, expected: %q, actual: %q", "gzip", acceptHeader)
return
}
if _, err := w.Write([]byte("{}")); err != nil {
w.WriteHeader(http.StatusInternalServerError)
t.Error(err)
}
return
default:
if _, err := w.Write([]byte(`{"version": {"number": "7.8"}}`)); err != nil {
w.WriteHeader(http.StatusInternalServerError)
t.Error(err)
}
return
}
}))
defer ts.Close()
urls := []string{"http://" + ts.Listener.Addr().String()}
e := &Elasticsearch{
URLs: urls,
IndexName: "{{host}}-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
EnableGzip: true,
ManageTemplate: false,
Log: testutil.Logger{},
}
err := e.Connect()
require.NoError(t, err)
err = e.Write(testutil.MockMetrics())
require.NoError(t, err)
}
func TestRequestHeaderWhenGzipIsDisabled(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/_bulk":
if contentHeader := r.Header.Get("Content-Encoding"); contentHeader == "gzip" {
w.WriteHeader(http.StatusInternalServerError)
t.Errorf("Not equal, expected: %q, actual: %q", "gzip", contentHeader)
return
}
if _, err := w.Write([]byte("{}")); err != nil {
w.WriteHeader(http.StatusInternalServerError)
t.Error(err)
}
return
default:
if _, err := w.Write([]byte(`{"version": {"number": "7.8"}}`)); err != nil {
w.WriteHeader(http.StatusInternalServerError)
t.Error(err)
}
return
}
}))
defer ts.Close()
urls := []string{"http://" + ts.Listener.Addr().String()}
e := &Elasticsearch{
URLs: urls,
IndexName: "{{host}}-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
EnableGzip: false,
ManageTemplate: false,
Log: testutil.Logger{},
}
err := e.Connect()
require.NoError(t, err)
err = e.Write(testutil.MockMetrics())
require.NoError(t, err)
}
func TestAuthorizationHeaderWhenBearerTokenIsPresent(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/_bulk":
if authHeader := r.Header.Get("Authorization"); authHeader != "Bearer 0123456789abcdef" {
w.WriteHeader(http.StatusInternalServerError)
t.Errorf("Not equal, expected: %q, actual: %q", "Bearer 0123456789abcdef", authHeader)
return
}
if _, err := w.Write([]byte("{}")); err != nil {
w.WriteHeader(http.StatusInternalServerError)
t.Error(err)
}
return
default:
if _, err := w.Write([]byte(`{"version": {"number": "7.8"}}`)); err != nil {
w.WriteHeader(http.StatusInternalServerError)
t.Error(err)
}
return
}
}))
defer ts.Close()
urls := []string{"http://" + ts.Listener.Addr().String()}
e := &Elasticsearch{
URLs: urls,
IndexName: "{{host}}-%Y.%m.%d",
Timeout: config.Duration(time.Second * 5),
EnableGzip: false,
ManageTemplate: false,
Log: testutil.Logger{},
AuthBearerToken: config.NewSecret([]byte("0123456789abcdef")),
}
err := e.Connect()
require.NoError(t, err)
err = e.Write(testutil.MockMetrics())
require.NoError(t, err)
}
func TestStandardIndexSettings(t *testing.T) {
e := &Elasticsearch{
TemplateName: "test",
IndexName: "telegraf-%Y.%m.%d",
Log: testutil.Logger{},
}
buf, err := e.createNewTemplate("test")
require.NoError(t, err)
var jsonData esTemplate
err = json.Unmarshal(buf.Bytes(), &jsonData)
require.NoError(t, err)
index := jsonData.Settings.Index
require.Equal(t, "10s", index["refresh_interval"])
require.InDelta(t, float64(5000), index["mapping.total_fields.limit"], testutil.DefaultDelta)
require.Equal(t, "0-1", index["auto_expand_replicas"])
require.Equal(t, "best_compression", index["codec"])
}
func TestDifferentIndexSettings(t *testing.T) {
e := &Elasticsearch{
TemplateName: "test",
IndexName: "telegraf-%Y.%m.%d",
IndexTemplate: map[string]interface{}{
"refresh_interval": "20s",
"mapping.total_fields.limit": 1000,
"codec": "best_compression",
},
Log: testutil.Logger{},
}
buf, err := e.createNewTemplate("test")
require.NoError(t, err)
var jsonData esTemplate
err = json.Unmarshal(buf.Bytes(), &jsonData)
require.NoError(t, err)
index := jsonData.Settings.Index
require.Equal(t, "20s", index["refresh_interval"])
require.InDelta(t, float64(1000), index["mapping.total_fields.limit"], testutil.DefaultDelta)
require.Equal(t, "best_compression", index["codec"])
}
type esTemplate struct {
Settings esSettings `json:"settings"`
}
type esSettings struct {
Index map[string]interface{} `json:"index"`
}

plugins/outputs/elasticsearch/sample.conf

@@ -0,0 +1,98 @@
# Configuration for Elasticsearch to send metrics to.
[[outputs.elasticsearch]]
## The full HTTP endpoint URL for your Elasticsearch instance
## Multiple urls can be specified as part of the same cluster;
## this means that only ONE of the urls will be written to in each interval
urls = [ "http://node1.es.example.com:9200" ] # required.
## Elasticsearch client timeout, defaults to "5s" if not set.
timeout = "5s"
## Set to true to ask Elasticsearch for a list of all cluster nodes,
## thus it is not necessary to list all nodes in the urls config option
enable_sniffer = false
## Set to true to enable gzip compression
enable_gzip = false
## Set the interval to check if the Elasticsearch nodes are available
## Setting to "0s" will disable the health check (not recommended in production)
health_check_interval = "10s"
## Set the timeout for periodic health checks.
# health_check_timeout = "1s"
## HTTP basic authentication details
# username = "telegraf"
# password = "mypassword"
## HTTP bearer token authentication details
# auth_bearer_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
## Index Config
## The target index for metrics (Elasticsearch will create it if it does not exist).
## You can use the date specifiers below to create indexes per time frame.
## The metric timestamp will be used to decide the destination index name
# %Y - year (2016)
# %y - last two digits of year (00..99)
# %m - month (01..12)
# %d - day of month (e.g., 01)
# %H - hour (00..23)
# %V - week of the year (ISO week) (01..53)
## Additionally, you can specify a tag name using the notation {{tag_name}}
## which will be used as part of the index name. If the tag does not exist,
## the default tag value will be used.
# index_name = "telegraf-{{host}}-%Y.%m.%d"
# default_tag_value = "none"
index_name = "telegraf-%Y.%m.%d" # required.
## Optional Index Config
## Set to true if Telegraf should use the "create" OpType while indexing
# use_optype_create = false
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Template Config
## Set to true if you want telegraf to manage its index template.
## If enabled it will create a recommended index template for telegraf indexes
manage_template = true
## The template name used for telegraf indexes
template_name = "telegraf"
## Set to true if you want telegraf to overwrite an existing template
overwrite_template = false
## If set to true a unique ID hash will be sent as a sha256(concat(timestamp,measurement,series-hash)) string.
## This enables resending and updating metric points while avoiding duplicated metrics with different IDs.
force_document_id = false
## Specifies the handling of NaN and Inf values.
## This option can have the following values:
## none -- do not modify field-values (default); will produce an error if NaNs or infs are encountered
## drop -- drop fields containing NaNs or infs
## replace -- replace with the value in "float_replacement_value" (default: 0.0)
## NaNs and inf will be replaced with the given number, -inf with the negative of that number
# float_handling = "none"
# float_replacement_value = 0.0
## Pipeline Config
## To use an ingest pipeline, set this to the name of the pipeline you want to use.
# use_pipeline = "my_pipeline"
## Additionally, you can specify a tag name using the notation {{tag_name}}
## which will be used as part of the pipeline name. If the tag does not exist,
## the default pipeline will be used as the pipeline. If no default pipeline is set,
## no pipeline is used for the metric.
# use_pipeline = "{{es_pipeline}}"
# default_pipeline = "my_pipeline"
#
# Custom HTTP headers
# To pass custom HTTP headers, define them in the section below
# [outputs.elasticsearch.headers]
# "X-Custom-Header" = "custom-value"
## Template Index Settings
## Overrides the template settings.index section with any provided options.
## Defaults provided here in the config
# template_index_settings = {
# refresh_interval = "10s",
# mapping.total_fields.limit = 5000,
# auto_expand_replicas = "0-1",
# codec = "best_compression"
# }