
Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
4963 changed files with 677545 additions and 0 deletions

docs/AGGREGATORS.md
# Aggregator Plugins
This section is for developers who want to create a new aggregator plugin.
## Aggregator Plugin Guidelines
* An aggregator must conform to the [telegraf.Aggregator][] interface.
* Aggregators should call `aggregators.Add` in their `init` function to
register themselves. See below for a quick example.
* To be available within Telegraf itself, plugins must register themselves
using a file in `github.com/influxdata/telegraf/plugins/aggregators/all`
named according to the plugin name. Make sure you also add build-tags to
conditionally build the plugin.
* Each plugin requires a file called `sample.conf` containing the sample
configuration for the plugin in TOML format. Please consult the
[Sample Config][] page for the latest style guidelines.
* Each plugin `README.md` file should include the `sample.conf` file in a
section describing the configuration by specifying a `toml` section in the
form `toml @sample.conf`. The specified file(s) are then injected
automatically into the README.
* The Aggregator plugin will need to keep caches of metrics that have passed
through it. This should be done using the builtin `HashID()` function of
each metric.
* When the `Reset()` function is called, all caches should be cleared.
* Follow the recommended [Code Style][].
[telegraf.Aggregator]: https://godoc.org/github.com/influxdata/telegraf#Aggregator
[Sample Config]: /docs/developers/SAMPLE_CONFIG.md
[Code Style]: /docs/developers/CODE_STYLE.md
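To illustrate the caching idea behind `HashID()`, here is a minimal,
self-contained sketch (not Telegraf's actual implementation) that derives a
stable series ID from a metric's name and sorted tags, so all points of the
same series map to one cache bucket:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// seriesID mimics the idea behind telegraf.Metric.HashID(): a stable
// uint64 derived from the measurement name and the sorted tag set.
func seriesID(name string, tags map[string]string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(name))
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // sort so that tag order does not change the hash
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(tags[k]))
	}
	return h.Sum64()
}

func main() {
	a := seriesID("cpu", map[string]string{"host": "a", "cpu": "cpu0"})
	b := seriesID("cpu", map[string]string{"cpu": "cpu0", "host": "a"})
	fmt.Println(a == b) // true: same series regardless of tag map order
}
```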
### Aggregator Plugin Example
### Registration
Registration of the plugin in `plugins/aggregators/all/min.go`:
```go
//go:build !custom || aggregators || aggregators.min

package all
import _ "github.com/influxdata/telegraf/plugins/aggregators/min" // register plugin
```
The _build-tags_ in the first line allow you to selectively include or exclude
your plugin when customizing Telegraf.
### Plugin
Content of your plugin file, e.g. `min.go`:
```go
//go:generate ../../../tools/readme_config_includer/generator
package min
// min.go
import (
_ "embed"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/aggregators"
)
//go:embed sample.conf
var sampleConfig string
type Min struct {
// caches for metric fields, names, and tags
fieldCache map[uint64]map[string]float64
nameCache map[uint64]string
tagCache map[uint64]map[string]string
}
func NewMin() telegraf.Aggregator {
m := &Min{}
m.Reset()
return m
}
func (*Min) SampleConfig() string {
return sampleConfig
}
func (m *Min) Init() error {
return nil
}
func (m *Min) Add(in telegraf.Metric) {
id := in.HashID()
if _, ok := m.nameCache[id]; !ok {
// hit an uncached metric, create caches for first time:
m.nameCache[id] = in.Name()
m.tagCache[id] = in.Tags()
m.fieldCache[id] = make(map[string]float64)
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
m.fieldCache[id][k] = fv
}
}
} else {
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
if _, ok := m.fieldCache[id][k]; !ok {
// hit an uncached field of a cached metric
m.fieldCache[id][k] = fv
continue
}
if fv < m.fieldCache[id][k] {
// set new minimum
m.fieldCache[id][k] = fv
}
}
}
}
}
func (m *Min) Push(acc telegraf.Accumulator) {
for id := range m.nameCache {
fields := map[string]interface{}{}
for k, v := range m.fieldCache[id] {
fields[k+"_min"] = v
}
acc.AddFields(m.nameCache[id], fields, m.tagCache[id])
}
}
func (m *Min) Reset() {
m.fieldCache = make(map[uint64]map[string]float64)
m.nameCache = make(map[uint64]string)
m.tagCache = make(map[uint64]map[string]string)
}
func convert(in interface{}) (float64, bool) {
switch v := in.(type) {
case float64:
return v, true
case int64:
return float64(v), true
default:
return 0, false
}
}
func init() {
aggregators.Add("min", func() telegraf.Aggregator {
return NewMin()
})
}
```
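For illustration, the plugin's `sample.conf` for this example could look like
the following sketch (the comment style follows that of in-tree plugins):

```toml
# Keep the aggregate min of each metric passing through.
[[aggregators.min]]
  ## The period on which to flush & clear the aggregator.
  period = "30s"

  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  drop_original = false
```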

# Aggregator & Processor Plugins
Telegraf has the concept of aggregator and processor plugins, which sit between
inputs and outputs. These plugins allow a user to do additional processing or
aggregation to collected metrics.
```text
┌───────────┐
│ │
│ CPU │───┐
│ │ │
└───────────┘ │
┌───────────┐ │ ┌───────────┐
│ │ │ │ │
│ Memory │───┤ ┌──▶│ InfluxDB │
│ │ │ │ │ │
└───────────┘ │ ┌─────────────┐ ┌─────────────┐ │ └───────────┘
│ │ │ │Aggregators │ │
┌───────────┐ │ │Processors │ │ - mean │ │ ┌───────────┐
│ │ │ │ - transform │ │ - quantiles │ │ │ │
│ MySQL │───┼───▶│ - decorate │────▶│ - min/max │───┼──▶│ File │
│ │ │ │ - filter │ │ - count │ │ │ │
└───────────┘ │ │ │ │ │ │ └───────────┘
│ └─────────────┘ └─────────────┘ │
┌───────────┐ │ │ ┌───────────┐
│ │ │ │ │ │
│ SNMP │───┤ └──▶│ Kafka │
│ │ │ │ │
└───────────┘ │ └───────────┘
┌───────────┐ │
│ │ │
│ Docker │───┘
│ │
└───────────┘
```
## Ordering
Processors are run first, then aggregators, then processors a second time.
Allowing processors to run again after aggregators gives users the opportunity
to run a processor on any aggregated metrics. This behavior can surprise
new users and may produce unexpected results in metrics. For example,
if the user scales data, it could get scaled twice!
To disable this behavior set the `skip_processors_after_aggregators` agent
configuration setting to true. Another option is to use metric filtering as
described below.
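For example, to disable the second processor run:

```toml
[agent]
  ## Run processors only once, before aggregators, instead of running
  ## them again on the aggregated metrics.
  skip_processors_after_aggregators = true
```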
## Metric Filtering
Use [metric filtering][] to control which metrics are passed through a processor
or aggregator. If a metric is filtered out the metric bypasses the plugin and
is passed downstream to the next plugin.
[metric filtering]: CONFIGURATION.md#measurement-filtering
## Processor
Processor plugins process metrics as they pass through and immediately emit
results based on the values they process. For example, this could be printing
all metrics or adding a tag to all metrics that pass through.
See the [processors][] for a full list of processor plugins available.
[processors]: https://github.com/influxdata/telegraf/tree/master/plugins/processors
## Aggregator
Aggregator plugins, on the other hand, are a bit more complicated. Aggregators
are typically for emitting new _aggregate_ metrics, such as a running mean,
minimum, maximum, or standard deviation. For this reason, all _aggregator_
plugins are configured with a `period`. The `period` is the size of the window
of metrics that each _aggregate_ represents. In other words, the emitted
_aggregate_ metric will be the aggregated value of the past `period` seconds.
Since many users will only care about their aggregates and not every single
metric gathered, there is also a `drop_original` argument, which tells Telegraf
to only emit the aggregates and not the original metrics.
Since aggregates are created for each measurement, field, and unique tag
combination the plugin receives, you can make use of `taginclude` to group
aggregates by specific tags only.
See the [aggregators][] for a full list of aggregator plugins available.
**Note:** Aggregator plugins only aggregate metrics within their periods
(i.e. `now() - period`). Data with a timestamp earlier than `now() - period`
cannot be included.
[aggregators]: https://github.com/influxdata/telegraf/tree/master/plugins/aggregators
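As an illustration, an aggregator such as `minmax` is configured with its
window like any other aggregator:

```toml
[[aggregators.minmax]]
  ## Emit aggregates over a 30s window.
  period = "30s"
  ## Emit only the aggregates, not the original metrics.
  drop_original = true
```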

docs/APPARMOR.md
# AppArmor
When running Telegraf under AppArmor users may see denial messages depending on
the Telegraf plugins used and the AppArmor profile applied. Telegraf does not
have control over the AppArmor profiles used. If users wish to address denials,
then they must understand the collections made by their choice of Telegraf
plugins, the denial messages, and the impact of changes to their AppArmor
profiles.
## Example Denial
For example, users might see denial messages such as:
```text
type=AVC msg=audit(1588901740.036:2457789): apparmor="DENIED" operation="ptrace" profile="docker-default" pid=9030 comm="telegraf" requested_mask="read" denied_mask="read" peer="unconfined"
```
In this case, Telegraf will also need the ability to `ptrace(read)`. Users will
first need to analyze the denial message for the operation and requested mask,
then consider whether the required changes make sense. There may be additional
denials even after initial changes.
For more details around AppArmor settings and configuration, users can check out
the `man 5 apparmor.d` man page on their system or the [AppArmor wiki][wiki].
[wiki]: https://gitlab.com/apparmor/apparmor/-/wikis/home

# Telegraf Commands & Flags
The following page describes some of the commands and flags available via the
Telegraf command line interface.
## Usage
General usage of Telegraf, requires passing in at least one config file with
the plugins the user wishes to use:
```bash
telegraf --config config.toml
```
## Help
To get the full list of subcommands and flags run:
```bash
telegraf help
```
Here are some commonly used flags that users should be aware of:
* `--config-directory`: Read all config files from a directory
* `--debug`: Enable additional debug logging
* `--once`: Run one collection and flush interval then exit
* `--test`: Run only inputs, output to stdout, and exit
Check the full help output for more available flags and options.
## Version
While Telegraf prints its version when running, a user who is uncertain
which version their binary is can run the version subcommand:
```bash
telegraf version
```
## Config
The config subcommand allows users to print out a sample configuration to
stdout. This subcommand can very quickly print out the default values for all
or any of the plugins available in Telegraf.
For example, to print the example config for all plugins, run:
```bash
telegraf config > telegraf.conf
```
If a user wants only certain inputs or outputs, filters can be used:
```bash
telegraf config --input-filter cpu --output-filter influxdb
```

docs/CONFIGURATION.md
<!-- markdownlint-disable MD024 -->
# Configuration
Telegraf's configuration file is written using [TOML][] and is composed of
three sections: [global tags][], [agent][] settings, and [plugins][].
## Generating a Configuration File
A default config file can be generated by telegraf:
```sh
telegraf config > telegraf.conf
```
To generate a file with specific inputs and outputs, you can use the
`--input-filter` and `--output-filter` flags:
```sh
telegraf config --input-filter cpu:mem:net:swap --output-filter influxdb:kafka
```
[View the full list][flags] of Telegraf commands and flags, or run
`telegraf --help`.
### Windows PowerShell v5 Encoding
In PowerShell 5, the default encoding is UTF-16LE and not UTF-8. Telegraf
expects a valid UTF-8 file. This is not an issue with PowerShell 6 or newer,
nor with the Command Prompt or the Git Bash shell.
As such, users will need to specify the output encoding when generating a full
configuration file:
```sh
telegraf.exe config | Out-File -Encoding utf8 telegraf.conf
```
This will generate a UTF-8 encoded file with a BOM. However, Telegraf can
handle the leading BOM.
## Configuration Loading
The location of the configuration file can be set via the `--config` command
line flag.
When the `--config-directory` command line flag is used files ending with
`.conf` in the specified directory will also be included in the Telegraf
configuration.
On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
configuration files.
## Environment Variables
Environment variables can be used anywhere in the config file, simply surround
them with `${}`. Replacement occurs before file parsing. For strings
the variable must be within quotes, e.g., `"${STR_VAR}"`; for numbers and
booleans it should be unquoted, e.g., `${INT_VAR}`, `${BOOL_VAR}`.
Keep in mind that when using double quotes, any backslashes (e.g.
`"C:\\Program Files"`) and other special characters need to be escaped. If an
environment variable contains a single backslash, enclose the variable in
single quotes, which signifies a string literal (e.g. `'C:\Program Files'`).
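A short sketch of the quoting rules (variable names are hypothetical):

```toml
[global_tags]
  region = "${REGION}"   # string value: the variable must be quoted

[agent]
  debug = ${DEBUG_MODE}  # boolean value: the variable must be unquoted
```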
In addition to this, Telegraf also supports Shell parameter expansion for
environment variables which allows syntax such as:
- `${VARIABLE:-default}` evaluates to default if VARIABLE is unset or empty in
the environment.
- `${VARIABLE-default}` evaluates to default only if VARIABLE is unset in the
  environment.

Similarly, the following syntax allows you to specify mandatory variables:
- `${VARIABLE:?err}` exits with an error message containing err if VARIABLE is
unset or empty in the environment.
- `${VARIABLE?err}` exits with an error message containing err if VARIABLE is
unset in the environment.
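For instance (with a hypothetical variable), a default keeps the config usable
even when the variable is not exported:

```toml
[[outputs.influxdb]]
  ## Falls back to a local instance if INFLUX_URL is unset or empty.
  urls = ["${INFLUX_URL:-http://localhost:8086}"]
```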
When using the `.deb` or `.rpm` packages, you can define environment variables
in the `/etc/default/telegraf` file.
**Example**:
`/etc/default/telegraf`:
For InfluxDB 1.x:
```shell
USER="alice"
INFLUX_URL="http://localhost:8086"
INFLUX_SKIP_DATABASE_CREATION="true"
INFLUX_PASSWORD="monkey123"
```
For InfluxDB OSS 2:
```shell
INFLUX_HOST="http://localhost:8086" # used to be 9999
INFLUX_TOKEN="replace_with_your_token"
INFLUX_ORG="your_username"
INFLUX_BUCKET="replace_with_your_bucket_name"
```
For InfluxDB Cloud 2:
```shell
# For AWS West (Oregon)
INFLUX_HOST="https://us-west-2-1.aws.cloud2.influxdata.com"
# Other Cloud URLs at https://v2.docs.influxdata.com/v2.0/reference/urls/#influxdb-cloud-urls
INFLUX_TOKEN="replace_with_your_token"
INFLUX_ORG="yourname@yourcompany.com"
INFLUX_BUCKET="replace_with_your_bucket_name"
```
`/etc/telegraf.conf`:
```toml
[global_tags]
user = "${USER}"
[[inputs.mem]]
# For InfluxDB 1.x:
[[outputs.influxdb]]
urls = ["${INFLUX_URL}"]
skip_database_creation = ${INFLUX_SKIP_DATABASE_CREATION}
password = "${INFLUX_PASSWORD}"
# For InfluxDB OSS 2:
[[outputs.influxdb_v2]]
urls = ["${INFLUX_HOST}"]
token = "${INFLUX_TOKEN}"
organization = "${INFLUX_ORG}"
bucket = "${INFLUX_BUCKET}"
# For InfluxDB Cloud 2:
[[outputs.influxdb_v2]]
urls = ["${INFLUX_HOST}"]
token = "${INFLUX_TOKEN}"
organization = "${INFLUX_ORG}"
bucket = "${INFLUX_BUCKET}"
```
The above files will produce the following effective configuration file to be
parsed:
```toml
[global_tags]
user = "alice"
[[inputs.mem]]
# For InfluxDB 1.x:
[[outputs.influxdb]]
urls = ["http://localhost:8086"]
skip_database_creation = true
password = "monkey123"
# For InfluxDB OSS 2:
[[outputs.influxdb_v2]]
urls = ["http://localhost:8086"] # used to be 9999
token = "replace_with_your_token"
organization = "your_username"
bucket = "replace_with_your_bucket_name"
# For InfluxDB Cloud 2:
[[outputs.influxdb_v2]]
urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
token = "replace_with_your_token"
organization = "yourname@yourcompany.com"
bucket = "replace_with_your_bucket_name"
```
## Secret-store secrets
In addition to, or instead of, environment variables, you can use secret-stores
to fill in credentials or similar values. To do so, you need to configure one or more
secret-store plugin(s) and then reference the secret in your plugin
configurations. A reference to a secret is specified in the form
`@{<secret store id>:<secret name>}`, where the `secret store id` is the unique
ID you defined for your secret-store and `secret name` is the name of the secret
to use.
**NOTE:** Both the `secret store id` and the `secret name` can only
consist of letters (both upper- and lowercase), numbers and underscores.
**Example**:
This example illustrates the use of secret-stores in plugins:
```toml
[global_tags]
user = "alice"
[[secretstores.os]]
id = "local_secrets"
[[secretstores.jose]]
id = "cloud_secrets"
path = "/etc/telegraf/secrets"
# Optional reference to another secret store to unlock this one.
password = "@{local_secrets:cloud_store_passwd}"
[[inputs.http]]
urls = ["http://server.company.org/metrics"]
username = "@{local_secrets:company_server_http_metric_user}"
password = "@{local_secrets:company_server_http_metric_pass}"
[[outputs.influxdb_v2]]
urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
token = "@{cloud_secrets:influxdb_token}"
organization = "yourname@yourcompany.com"
bucket = "replace_with_your_bucket_name"
```
### Notes
When using plugins supporting secrets, Telegraf locks the memory pages
containing the secrets. Therefore, the locked memory limit has to be set to a
suitable value. Telegraf will check the limit and the number of used secrets at
startup and will warn if your limit is too low. In this case, please increase
the limit via `ulimit -l`.
If you are running Telegraf in a jail you might need to allow locked pages in
that jail by setting `allow.mlock = 1;` in your config.
## Intervals
Intervals are durations of time and can be specified for supporting settings by
combining an integer value and time unit as a string value. Valid time units are
`ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`.
```toml
[agent]
interval = "10s"
```
## Global Tags
Global tags can be specified in the `[global_tags]` table in key="value"
format. All metrics that are gathered will be tagged with the tags specified.
Global tags are overridden by tags set by plugins.
```toml
[global_tags]
dc = "us-east-1"
```
## Agent
The agent table configures Telegraf and the defaults used across all plugins.
- **interval**: Default data collection [interval][] for all inputs.
- **round_interval**: Rounds the collection interval to [interval][],
  i.e., if `interval = "10s"`, always collect on :00, :10, :20, etc.
- **metric_batch_size**:
Telegraf will send metrics to outputs in batches of at most
metric_batch_size metrics.
This controls the size of writes that Telegraf sends to output plugins.
- **metric_buffer_limit**:
Maximum number of unwritten metrics per output. Increasing this value
allows for longer periods of output downtime without dropping metrics at the
cost of higher maximum memory usage. Oldest metrics are overwritten in favor
of new ones when the buffer fills up.
- **collection_jitter**:
Collection jitter is used to jitter the collection by a random [interval][].
Each plugin will sleep for a random time within jitter before collecting.
This can be used to avoid many plugins querying things like sysfs at the
same time, which can have a measurable effect on the system.
- **collection_offset**:
Collection offset is used to shift the collection by the given [interval][].
This can be used to avoid many plugins querying constrained devices
at the same time by manually scheduling them in time.
- **flush_interval**:
Default flushing [interval][] for all outputs. Maximum flush_interval will be
flush_interval + flush_jitter.
- **flush_jitter**:
Default flush jitter for all outputs. This jitters the flush [interval][]
by a random amount. This is primarily to avoid large write spikes for users
running a large number of telegraf instances. I.e., a jitter of 5s and an
interval of 10s means flushes will happen every 10-15s.
- **precision**:
Collected metrics are rounded to the precision specified as an [interval][].
Precision will NOT be used for service inputs. It is up to each individual
service input to set the timestamp at the appropriate precision.
- **debug**:
Log at debug level.
- **quiet**:
Log only error level messages.
- **logformat**:
Log format controls the way messages are logged and can be one of "text",
"structured" or, on Windows, "eventlog". The output file (if any) is
determined by the `logfile` setting.
- **structured_log_message_key**:
Message key for structured logs, to override the default of "msg".
Ignored if `logformat` is not "structured".
- **logfile**:
Name of the file to be logged to or stderr if unset or empty. This
setting is ignored for the "eventlog" format.
- **logfile_rotation_interval**:
The logfile will be rotated after the time interval specified. When set to
0 no time based rotation is performed.
- **logfile_rotation_max_size**:
The logfile will be rotated when it becomes larger than the specified size.
When set to 0 no size based rotation is performed.
- **logfile_rotation_max_archives**:
Maximum number of rotated archives to keep, any older logs are deleted. If
set to -1, no archives are removed.
- **log_with_timezone**:
Pick a timezone to use when logging or type 'local' for local time. Example: 'America/Chicago'.
[See this page for options/formats.](https://socketloop.com/tutorials/golang-display-list-of-timezones-with-gmt)
- **hostname**:
Override default hostname, if empty use os.Hostname()
- **omit_hostname**:
If set to true, do not set the "host" tag in the telegraf agent.
- **snmp_translator**:
Method of translating SNMP objects. Can be "netsnmp" (deprecated) which
translates by calling external programs `snmptranslate` and `snmptable`,
or "gosmi" which translates using the built-in gosmi library.
- **statefile**:
Name of the file to load the states of plugins from and store the states to.
If uncommented and not empty, this file will be used to save the state of
stateful plugins on termination of Telegraf. If the file exists on start,
the state in the file will be restored for the plugins.
- **always_include_local_tags**:
Ensure tags explicitly defined in a plugin will *always* pass tag-filtering
via `taginclude` or `tagexclude`. This removes the need to specify local tags
twice.
- **always_include_global_tags**:
Ensure tags explicitly defined in the `global_tags` section will *always* pass
tag-filtering via `taginclude` or `tagexclude`. This removes the need to
specify those tags twice.
- **skip_processors_after_aggregators**:
By default, processors are run a second time after aggregators. Changing
this setting to true will skip the second run of processors.
- **buffer_strategy**:
The type of buffer to use for telegraf output plugins. Supported modes are
`memory`, the default and original buffer type, and `disk`, an experimental
disk-backed buffer which will serialize all metrics to disk as needed to
improve data durability and reduce the chance for data loss. This is only
supported at the agent level.
- **buffer_directory**:
The directory to use when in `disk` buffer mode. Each output plugin will make
another subdirectory in this directory with the output plugin's ID.
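Putting a few of these settings together, a typical agent table might look like
the following (values are illustrative, not recommendations):

```toml
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "5s"
  precision = "0s"
```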
## Plugins
Telegraf plugins are divided into 4 types: [inputs][], [outputs][],
[processors][], and [aggregators][].
Unlike the `global_tags` and `agent` tables, any plugin can be defined
multiple times and each instance will run independently. This allows you to
have plugins defined with differing configurations as needed within a single
Telegraf process.
Each plugin has a unique set of configuration options, reference the
sample configuration for details. Additionally, several options are available
on any plugin depending on its type.
### Input Plugins
Input plugins gather and create metrics. They support both polling and event
driven operation.
Parameters that can be used with any input plugin:
- **alias**: Name an instance of a plugin.
- **interval**:
Overrides the `interval` setting of the [agent][Agent] for the plugin. How
often to gather this metric. Normal plugins use a single global interval, but
if one particular input should be run less or more often, you can configure
that here.
- **precision**:
Overrides the `precision` setting of the [agent][Agent] for the plugin.
Collected metrics are rounded to the precision specified as an [interval][].
When this value is set on a service input, multiple events occurring at the
same timestamp may be merged by the output database.
- **time_source**:
Specifies the source of the timestamp on metrics. Possible values are:
- `metric` will not alter the metric (default)
- `collection_start` sets the timestamp to when collection started
- `collection_end` sets the timestamp to when collection finished
`time_source` will NOT be used for service inputs. It is up to each individual
service input to set the timestamp.
- **collection_jitter**:
Overrides the `collection_jitter` setting of the [agent][Agent] for the
plugin. Collection jitter is used to jitter the collection by a random
[interval][]. The value must be non-zero to override the agent setting.
- **collection_offset**:
Overrides the `collection_offset` setting of the [agent][Agent] for the
plugin. Collection offset is used to shift the collection by the given
[interval][]. The value must be non-zero to override the agent setting.
- **name_override**: Override the base name of the measurement. (Default is
the name of the input).
- **name_prefix**: Specifies a prefix to attach to the measurement name.
- **name_suffix**: Specifies a suffix to attach to the measurement name.
- **tags**: A map of tags to apply to a specific input's measurements.
- **log_level**: Override the log-level for this plugin. Possible values are
`error`, `warn`, `info`, `debug` and `trace`.
The [metric filtering][] parameters can be used to limit what metrics are
emitted from the input plugin.
#### Examples
Use the name_suffix parameter to emit measurements with the name `cpu_total`:
```toml
[[inputs.cpu]]
name_suffix = "_total"
percpu = false
totalcpu = true
```
Use the name_override parameter to emit measurements with the name `foobar`:
```toml
[[inputs.cpu]]
name_override = "foobar"
percpu = false
totalcpu = true
```
Emit measurements with two additional tags: `tag1=foo` and `tag2=bar`
> **NOTE**: With TOML, order matters. Parameters belong to the last defined
> table header, place `[inputs.cpu.tags]` table at the *end* of the plugin
> definition.
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[inputs.cpu.tags]
tag1 = "foo"
tag2 = "bar"
```
Alternatively, when using the inline table syntax, the tags do not need
to go at the end:
```toml
[[inputs.cpu]]
tags = {tag1 = "foo", tag2 = "bar"}
percpu = false
totalcpu = true
```
Utilize `name_override`, `name_prefix`, or `name_suffix` config options to
avoid measurement collisions when defining multiple plugins:
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[[inputs.cpu]]
percpu = true
totalcpu = false
name_override = "percpu_usage"
fieldexclude = ["cpu_time*"]
```
### Output Plugins
Output plugins write metrics to a location. Outputs commonly write to
databases, network services, and messaging systems.
Parameters that can be used with any output plugin:
- **alias**: Name an instance of a plugin.
- **flush_interval**: The maximum time between flushes. Use this setting to
override the agent `flush_interval` on a per plugin basis.
- **flush_jitter**: The amount of time to jitter the flush interval. Use this
setting to override the agent `flush_jitter` on a per plugin basis. The value
must be non-zero to override the agent setting.
- **metric_batch_size**: The maximum number of metrics to send at once. Use
this setting to override the agent `metric_batch_size` on a per plugin basis.
- **metric_buffer_limit**: The maximum number of unsent metrics to buffer.
Use this setting to override the agent `metric_buffer_limit` on a per plugin
basis.
- **name_override**: Override the original name of the measurement.
- **name_prefix**: Specifies a prefix to attach to the measurement name.
- **name_suffix**: Specifies a suffix to attach to the measurement name.
- **log_level**: Override the log-level for this plugin. Possible values are
`error`, `warn`, `info` and `debug`.
The [metric filtering][] parameters can be used to limit what metrics are
emitted from the output plugin.
#### Examples
Override flush parameters for a single output:
```toml
[agent]
flush_interval = "10s"
flush_jitter = "5s"
metric_batch_size = 1000
[[outputs.influxdb]]
urls = [ "http://example.org:8086" ]
database = "telegraf"
[[outputs.file]]
files = [ "stdout" ]
flush_interval = "1s"
flush_jitter = "1s"
metric_batch_size = 10
```
### Processor Plugins
Processor plugins perform processing tasks on metrics and are commonly used to
rename or apply transformations to metrics. Processors are applied after the
input plugins and before any aggregator plugins.
Parameters that can be used with any processor plugin:
- **alias**: Name an instance of a plugin.
- **order**: The order in which the processors are executed, starting with 1.
If this is not specified, processor execution order will follow the order in
the config file. Processors without `order` take precedence over those
with a defined order.
- **log_level**: Override the log-level for this plugin. Possible values are
`error`, `warn`, `info` and `debug`.
The [metric filtering][] parameters can be used to limit what metrics are
handled by the processor. Excluded metrics are passed downstream to the next
processor.
#### Examples
If the order in which processors are applied matters, you must set `order` on
all involved processors:
```toml
[[processors.rename]]
order = 1
[[processors.rename.replace]]
tag = "path"
dest = "resource"
[[processors.strings]]
order = 2
[[processors.strings.trim_prefix]]
tag = "resource"
prefix = "/api/"
```
### Aggregator Plugins
Aggregator plugins produce new metrics after examining metrics over a time
period; as the name suggests, they are commonly used to produce new aggregates
such as mean/max/min metrics. Aggregators operate on metrics after any
processors have been applied.
Parameters that can be used with any aggregator plugin:
- **alias**: Name an instance of a plugin.
- **period**: The period on which to flush & clear each aggregator. All
metrics that are sent with timestamps outside of this period will be ignored
by the aggregator.
The default period is set to 30 seconds.
- **delay**: The delay before each aggregator is flushed. This controls
how long aggregators wait before receiving metrics from input
plugins, in the case that aggregators are flushing and inputs are gathering
on the same interval.
The default delay is set to 100 ms.
- **grace**: The duration for which metrics will still be aggregated
by the plugin, even though they are outside of the aggregation period. This
is needed when the agent is expected to receive late metrics
and it is acceptable to roll them up into the next aggregation period.
The default grace duration is set to 0s.
- **drop_original**: If true, the original metric will be dropped by the
aggregator and will not get sent to the output plugins.
- **name_override**: Override the base name of the measurement. (Default is
the name of the input).
- **name_prefix**: Specifies a prefix to attach to the measurement name.
- **name_suffix**: Specifies a suffix to attach to the measurement name.
- **tags**: A map of tags to apply to the measurement - behavior varies based on aggregator.
- **log_level**: Override the log-level for this plugin. Possible values are
`error`, `warn`, `info` and `debug`.
The [metric filtering][] parameters can be used to limit what metrics are
handled by the aggregator. Excluded metrics are passed downstream to the next
aggregator.
#### Examples
Collect and emit the min/max of the system load1 metric every 30s, dropping
the originals.
```toml
[[inputs.system]]
fieldinclude = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
[[outputs.file]]
files = ["stdout"]
```
Collect and emit the min/max of the swap metrics every 30s, dropping the
originals. The aggregator will not be applied to the system load metrics due
to the `namepass` parameter.
```toml
[[inputs.swap]]
[[inputs.system]]
fieldinclude = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
namepass = ["swap"] # only "pass" swap metrics through the aggregator.
[[outputs.file]]
files = ["stdout"]
```
## Metric Filtering
Metric filtering can be configured per plugin on any input, output, processor,
and aggregator plugin. Filters fall under two categories: Selectors and
Modifiers.
### Selectors
Selector filters include or exclude entire metrics. When a metric is excluded
from an Input or an Output plugin, the metric is dropped. If a metric is
excluded from a Processor or Aggregator plugin, it skips the plugin and is
sent onwards to the next stage of processing.
- **namepass**:
An array of [glob pattern][] strings. Only metrics whose measurement name
matches a pattern in this list are emitted. Additionally, a custom list of
separators can be specified using `namepass_separator`. These separators
are excluded from wildcard glob pattern matching.
- **namedrop**:
The inverse of `namepass`. If a match is found the metric is discarded. This
is tested on metrics after they have passed the `namepass` test. Additionally,
a custom list of separators can be specified using `namedrop_separator`. These
separators are excluded from wildcard glob pattern matching.
- **tagpass**:
A table mapping tag keys to arrays of [glob pattern][] strings. Only metrics
that contain a tag key in the table and a tag value matching one of its
patterns are emitted. This can either use the explicit table syntax (e.g.
a subsection using a `[...]` header) or inline table syntax (e.g. like
a JSON table with `{...}`). Please see the notes below on specifying the table.
- **tagdrop**:
The inverse of `tagpass`. If a match is found the metric is discarded. This
is tested on metrics after they have passed the `tagpass` test.
> NOTE: Due to the way TOML is parsed, when using the explicit table
> syntax (with `[...]`) for `tagpass` and `tagdrop` parameters, they
> must be defined at the **end** of the plugin definition, otherwise subsequent
> plugin config options will be interpreted as part of the tagpass/tagdrop
> tables.
> NOTE: When using the inline table syntax (e.g. `{...}`) the table must exist
> in the main plugin definition and not in any sub-table (e.g.
> `[[inputs.win_perf_counters.object]]`).
- **metricpass**:
A ["Common Expression Language"][CEL] (CEL) expression with boolean result where
`true` will allow the metric to pass, otherwise the metric is discarded. This
filter expression is more general compared to e.g. `namepass` and also allows
for time-based filtering. An introduction to the CEL language can be found
[here][CEL intro]. Further details, such as available functions and expressions,
are provided in the [language definition][CEL lang] as well as in the
[extension documentation][CEL ext].
**NOTE:** Expressions that may be valid and compile, but fail at runtime will
result in the expression reporting as `true`. The metrics will pass through
as a result. An example is when reading a non-existing field. If this happens,
the evaluation is aborted, an error is logged, and the expression is reported as
`true`, so the metric passes.
> NOTE: As CEL is an *interpreted* language, this type of filtering is much
> slower compared to `namepass`/`namedrop` and friends. Consider using the
> more restrictive filter options where possible in high-throughput
> scenarios.
[CEL]: https://github.com/google/cel-go/tree/master
[CEL intro]: https://codelabs.developers.google.com/codelabs/cel-go
[CEL lang]: https://github.com/google/cel-spec/blob/master/doc/langdef.md
[CEL ext]: https://github.com/google/cel-go/tree/master/ext#readme
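For example, the inline table syntax for `tagpass` and a `metricpass` CEL
expression could look like the following sketch (plugin choices and values
are illustrative):

```toml
[[processors.printer]]
  ## Pass metrics tagged with state="on" or with a large enough value
  metricpass = '("state" in tags && tags.state == "on") || fields.value > 3.0'

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  ## Inline table syntax for tagpass; must be in the main plugin
  ## definition, not in a sub-table
  tagpass = { cpu = ["cpu0"] }
```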
### Modifiers
Modifier filters remove tags and fields from a metric. If all fields are
removed, the metric is removed and, as a result, not passed through to the following
processors or any output plugin. Tags and fields are modified before a metric is
passed to a processor, aggregator, or output plugin. When used with an input
plugin the filter applies after the input runs.
- **fieldinclude**:
An array of [glob pattern][] strings. Only fields whose field key matches a
pattern in this list are emitted.
- **fieldexclude**:
The inverse of `fieldinclude`. Fields with a field key matching one of the
patterns will be discarded from the metric. This is tested on metrics after
they have passed the `fieldinclude` test.
- **taginclude**:
An array of [glob pattern][] strings. Only tags with a tag key matching one of
the patterns are emitted. In contrast to `tagpass`, which will pass an entire
metric based on its tag, `taginclude` removes all non-matching tags from the
metric. Any tag can be filtered including global tags and the agent `host`
tag.
- **tagexclude**:
The inverse of `taginclude`. Tags with a tag key matching one of the patterns
will be discarded from the metric. Any tag can be filtered including global
tags and the agent `host` tag.
### Filtering Examples
#### Using tagpass and tagdrop
```toml
[[inputs.cpu]]
percpu = true
totalcpu = false
fieldexclude = ["cpu_time"]
# Don't collect CPU data for cpu6 & cpu7
[inputs.cpu.tagdrop]
cpu = [ "cpu6", "cpu7" ]
[[inputs.disk]]
[inputs.disk.tagpass]
# tagpass conditions are OR, not AND.
# If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
# then the metric passes
fstype = [ "ext4", "xfs" ]
# Globs can also be used on the tag values
path = [ "/opt", "/home*" ]
[[inputs.win_perf_counters]]
[[inputs.win_perf_counters.object]]
ObjectName = "Network Interface"
Instances = ["*"]
Counters = [
"Bytes Received/sec",
"Bytes Sent/sec"
]
Measurement = "win_net"
# Do not send metrics where the Windows interface name (instance) begins with
# 'isatap' or 'Local'
[inputs.win_perf_counters.tagdrop]
instance = ["isatap*", "Local*"]
```
#### Using fieldinclude and fieldexclude
```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
percpu = false
totalcpu = true
fieldexclude = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[inputs.disk]]
fieldinclude = ["inodes*"]
```
#### Using namepass and namedrop
```toml
# Drop all metrics about containers for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namedrop = ["container_*"]
# Only store rest client related metrics for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namepass = ["rest_client_*"]
```
#### Using namepass and namedrop with separators
```toml
# Pass all metrics of type 'A.C.B' and drop all others like 'A.C.D.B'
[[inputs.socket_listener]]
data_format = "graphite"
templates = ["measurement*"]
namepass = ["A.*.B"]
namepass_separator = "."
# Drop all metrics of type 'A.C.B' and pass all others like 'A.C.D.B'
[[inputs.socket_listener]]
data_format = "graphite"
templates = ["measurement*"]
namedrop = ["A.*.B"]
namedrop_separator = "."
```
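The separator option above makes `*` stop at the separator character, so
`A.*.B` matches `A.C.B` but not `A.C.D.B`. Telegraf implements this with the
gobwas/glob library; the following self-contained Go sketch emulates the same
semantics with the standard `regexp` package purely for illustration (it is
not Telegraf's actual implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// namepassMatch emulates a glob where "*" must not cross the
// separator character "." -- similar to namepass_separator = ".".
// Illustrative stand-in only, not Telegraf's glob engine.
func namepassMatch(pattern, name string) bool {
	// Translate "A.*.B" into the anchored regex `^A\.[^.]*\.B$`.
	quoted := regexp.QuoteMeta(pattern)
	re := "^" + regexp.MustCompile(`\\\*`).ReplaceAllString(quoted, `[^.]*`) + "$"
	return regexp.MustCompile(re).MatchString(name)
}

func main() {
	fmt.Println(namepassMatch("A.*.B", "A.C.B"))   // true
	fmt.Println(namepassMatch("A.*.B", "A.C.D.B")) // false
}
```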
#### Using taginclude and tagexclude
```toml
# Only include the "cpu" tag in the measurements for the cpu plugin.
[[inputs.cpu]]
percpu = true
totalcpu = true
taginclude = ["cpu"]
# Exclude the "fstype" tag from the measurements for the disk plugin.
[[inputs.disk]]
tagexclude = ["fstype"]
```
#### Metrics can be routed to different outputs using the metric name and tags
```toml
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf"
# Drop all measurements that start with "aerospike"
namedrop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
# Only accept aerospike data:
namepass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-cpu0-data"
# Only store measurements where the tag "cpu" matches the value "cpu0"
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
```
#### Routing metrics to different outputs based on the input
Metrics are tagged with `influxdb_database` in the input, which is then used to
select the output. The tag is removed in the outputs before writing with `tagexclude`.
```toml
[[outputs.influxdb]]
urls = ["http://influxdb.example.com"]
database = "db_default"
[outputs.influxdb.tagdrop]
influxdb_database = ["*"]
[[outputs.influxdb]]
urls = ["http://influxdb.example.com"]
database = "db_other"
tagexclude = ["influxdb_database"]
[outputs.influxdb.tagpass]
influxdb_database = ["other"]
[[inputs.disk]]
[inputs.disk.tags]
influxdb_database = "other"
```
## Transport Layer Security (TLS)
Reference the detailed [TLS][] documentation.
[TOML]: https://github.com/toml-lang/toml#toml
[global tags]: #global-tags
[interval]: #intervals
[agent]: #agent
[plugins]: #plugins
[inputs]: #input-plugins
[outputs]: #output-plugins
[processors]: #processor-plugins
[aggregators]: #aggregator-plugins
[metric filtering]: #metric-filtering
[TLS]: /docs/TLS.md
[glob pattern]: https://github.com/gobwas/glob#syntax
[flags]: /docs/COMMANDS_AND_FLAGS.md

docs/CUSTOMIZATION.md
# Customization
You can build customized versions of Telegraf with a specific plugin set using
the [custom builder](/tools/custom_builder) tool or
[build-tags](https://pkg.go.dev/cmd/go#hdr-Build_constraints).
For build tags, the plugins can be selected either category-wise, i.e.
`inputs`, `outputs`, `processors`, `aggregators`, `parsers`, `secretstores`
and `serializers` or individually, e.g. `inputs.modbus` or `outputs.influxdb`.
Usually the build tags correspond to the plugin names used in the Telegraf
configuration. To be sure, check the files in the corresponding
`plugin/<category>/all` directory. Make sure to include all parsers you intend
to use.
__Note:__ You _always_ need to include the `custom` tag when customizing the
build as otherwise _all_ plugins will be selected regardless of other tags.
## Via make
When using the project's makefile, the build can be customized via the
`BUILDTAGS` environment variable containing a __comma-separated__ list of the
selected plugins (or categories) __and__ the `custom` tag.
For example
```shell
BUILDTAGS="custom,inputs,outputs.influxdb_v2,parsers.json" make
```
will build a customized Telegraf including _all_ `inputs`, the InfluxDB v2
`output` and the `json` parser.
## Via `go build`
If you wish to build Telegraf using native go tools, you can use the `go build`
command with the `-tags` option. Specify a __comma-separated__ list of the
selected plugins (or categories) __and__ the `custom` tag as argument.
For example
```shell
go build -tags "custom,inputs,outputs.influxdb_v2,parsers.json" ./cmd/telegraf
```
will build a customized Telegraf including _all_ `inputs`, the InfluxDB v2
`output` and the `json` parser.

# Input Data Formats
Telegraf contains many general purpose plugins that support parsing input data
using a configurable parser into [metrics][]. This allows, for example, the
`kafka_consumer` input plugin to process messages in any of InfluxDB Line
Protocol, JSON format, or Apache Avro format.
- [Avro](/plugins/parsers/avro)
- [Binary](/plugins/parsers/binary)
- [Collectd](/plugins/parsers/collectd)
- [CSV](/plugins/parsers/csv)
- [Dropwizard](/plugins/parsers/dropwizard)
- [Form URL Encoded](/plugins/parsers/form_urlencoded)
- [Graphite](/plugins/parsers/graphite)
- [Grok](/plugins/parsers/grok)
- [InfluxDB Line Protocol](/plugins/parsers/influx)
- [JSON](/plugins/parsers/json)
- [JSON v2](/plugins/parsers/json_v2)
- [Logfmt](/plugins/parsers/logfmt)
- [Nagios](/plugins/parsers/nagios)
- [OpenMetrics](/plugins/parsers/openmetrics)
- [OpenTSDB](/plugins/parsers/opentsdb)
- [Parquet](/plugins/parsers/parquet)
- [Prometheus](/plugins/parsers/prometheus)
- [PrometheusRemoteWrite](/plugins/parsers/prometheusremotewrite)
- [Value](/plugins/parsers/value), e.g. `45` or `"booyah"`
- [Wavefront](/plugins/parsers/wavefront)
- [XPath](/plugins/parsers/xpath) (supports XML, JSON, MessagePack, Protocol Buffers)
Any input plugin containing the `data_format` option can use it to select the
desired parser:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume.
data_format = "json"
```
[metrics]: /docs/METRICS.md

# Output Data Formats
In addition to output specific data formats, Telegraf supports a set of
standard data formats that may be selected from when configuring many output
plugins.
1. [InfluxDB Line Protocol](/plugins/serializers/influx)
1. [Binary](/plugins/serializers/binary)
1. [Carbon2](/plugins/serializers/carbon2)
1. [CloudEvents](/plugins/serializers/cloudevents)
1. [CSV](/plugins/serializers/csv)
1. [Graphite](/plugins/serializers/graphite)
1. [JSON](/plugins/serializers/json)
1. [MessagePack](/plugins/serializers/msgpack)
1. [Prometheus](/plugins/serializers/prometheus)
1. [Prometheus Remote Write](/plugins/serializers/prometheusremotewrite)
1. [ServiceNow Metrics](/plugins/serializers/nowmetric)
1. [SplunkMetric](/plugins/serializers/splunkmetric)
1. [Template](/plugins/serializers/template)
1. [Wavefront](/plugins/serializers/wavefront)
Plugins that support these data formats can be identified by the presence of
a `data_format` config option, for example, in the `file` output plugin:
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout"]
## Data format to output.
data_format = "influx"
```

docs/DOCKER.md
# Docker Images
Telegraf is available as an [Official image][] on DockerHub. Official images
are a curated set of Docker Images that also automatically get security updates
from Docker, follow a set of best practices, and are available via a shortcut
syntax which omits the organization.
InfluxData maintains Debian and Alpine based images across the last three
minor releases. To pull the latest Telegraf images:
```shell
# latest Debian-based image
docker pull telegraf
# latest Alpine-based image
docker pull telegraf:alpine
```
See the [Telegraf DockerHub][] page for complete details on available images,
versions, and tags.
[official image]: https://docs.docker.com/trusted-content/official-images/
[Telegraf DockerHub]: https://hub.docker.com/_/telegraf
## Nightly Images
[Nightly builds][] are available and are generated from the master branch each
day at around midnight UTC. The artifacts include binary packages, RPM & DEB
packages, and nightly Docker images that are hosted on [quay.io][].
[Nightly builds]: /docs/NIGHTLIES.md
[quay.io]: https://quay.io/repository/influxdb/telegraf-nightly?tab=tags&tag=latest
## Dockerfiles
The [Dockerfiles][] for these images are available for users to use as well.
[Dockerfiles]: https://github.com/influxdata/influxdata-docker
## Lockable Memory
By default, Telegraf requires the ability to use lockable memory when
running. In some Docker deployments a container may not have enough lockable
memory, which results in the following warning:
```text
W! Insufficient lockable memory 64kb when 72kb is required. Please increase the limit for Telegraf in your Operating System!
```
or this error:
```text
panic: could not acquire lock on 0x7f7a8890f000, limit reached? [Err: cannot allocate memory]
```
Users have two options:
1. Increase the ulimit in the container. The user does this with the
`ulimit -l` command, which both shows and sets the value. For Docker, there is
also a `--ulimit` flag that can be used, like `--ulimit memlock=8192:8192`.
2. Add the `--unprotected` flag to the command arguments to not use locked
memory and instead store secrets in unprotected memory. This is less secure
as secrets could find their way into paged out memory and can be written to
disk unencrypted, therefore this is opt-in. For docker look at updating the
`CMD` used to include this flag.
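As a sketch of option 1, the ulimit can also be raised in a Docker Compose
file (the service name and limit values below are illustrative):

```yaml
services:
  telegraf:
    image: telegraf
    ulimits:
      # Raise the lockable-memory limit (soft/hard, in KiB)
      memlock:
        soft: 8192
        hard: 8192
```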

docs/EXTERNAL_PLUGINS.md
# External Plugins
[External plugins](/EXTERNAL_PLUGINS.md) are external programs that are built
outside of Telegraf that can run through an `execd` plugin. These external
plugins allow for more flexibility compared to internal Telegraf plugins.
- External plugins can be written in any language (internal Telegraf plugins can
only be written in Go)
- External plugins can access libraries not written in Go
- Utilize licensed software that is not available to the open source community
- Can include large dependencies that would otherwise bloat Telegraf
- You do not need to wait on the Telegraf team to publish the plugin and start
working with it.
- Using the [shim](/plugins/common/shim) you can easily convert plugins between
internal and external use
- Using 3rd-party libraries requiring CGO support
## External Plugin Guidelines
The guidelines for writing external plugins follow those for our general
[input](/docs/INPUTS.md), [output](/docs/OUTPUTS.md),
[processor](/docs/PROCESSORS.md), and [aggregator](/docs/AGGREGATORS.md)
plugins. Please reference the documentation on how to create these plugins
written in Go.
_For listed [external plugins](/EXTERNAL_PLUGINS.md), the author of the external
plugin is also responsible for the maintenance and feature development of
external plugins. Expect to have users open plugin issues on its respective
GitHub repository._
### Execd Go Shim
For Go plugins, there is a [Execd Go Shim](/plugins/common/shim/) that will make
it trivial to extract an internal input, processor, or output plugin from the
main Telegraf repo out to a stand-alone repo. This shim allows anyone to build
and run it as a separate app using one of the `execd` plugins:
- [inputs.execd](/plugins/inputs/execd)
- [processors.execd](/plugins/processors/execd)
- [outputs.execd](/plugins/outputs/execd)
Follow the [Steps to externalize a plugin][] and
[Steps to build and run your plugin][] to properly use the Execd Go Shim.
[Steps to externalize a plugin]: /plugins/common/shim#steps-to-externalize-a-plugin
[Steps to build and run your plugin]: /plugins/common/shim#steps-to-build-and-run-your-plugin
## Step-by-Step guidelines
This is a guide to help you set up a plugin to use it with `execd`:
1. Write a Telegraf plugin. Depending on the plugin, follow the guidelines on
how to create the plugin itself using InfluxData's best practices:
- [Input Plugins](/docs/INPUTS.md)
- [Processor Plugins](/docs/PROCESSORS.md)
- [Aggregator Plugins](/docs/AGGREGATORS.md)
- [Output Plugins](/docs/OUTPUTS.md)
2. Move the project to an external repo, it is recommended to preserve the
path structure, but not strictly necessary. For example, if the plugin was
at `plugins/inputs/cpu`, it is recommended that it also be under
`plugins/inputs/cpu` in the new repo. For a further example of what this
might look like, take a look at [ssoroka/rand][] or
[danielnelson/telegraf-execd-openvpn][].
3. Copy [main.go](/plugins/common/shim/example/cmd/main.go) into the project
under the `cmd` folder. This will be the entrypoint to the plugin when run as
a stand-alone program and it will call the shim code for you to make that
happen. It is recommended to have only one plugin per repo, as the shim is
not designed to run multiple plugins at the same time.
4. Edit the main.go file to import the plugin. Within Telegraf this would have
been done in an all.go file, but here we do not split the two apart, and the
change just goes in the top of main.go. If you skip this step, the plugin
will do nothing.
> `_ "github.com/me/my-plugin-telegraf/plugins/inputs/cpu"`
5. Optionally add a [plugin.conf](./example/cmd/plugin.conf) for configuration
specific to the plugin. Note that this config file **must be separate from
the rest of the config for Telegraf, and must not be in a shared directory
where Telegraf is expecting to load all configs**. If Telegraf reads this
config file it will not know which plugin it relates to. Telegraf instead
uses an execd config block to look for this plugin.
6. Add usage and development instructions in the homepage of the repository
for running the plugin with its respective `execd` plugin. Please refer to
[openvpn install][] and [awsalarms install][] for examples. Include the
following steps:
1. How to download the release package for the platform or how to clone the
binary for the external plugin
1. The commands to build the binary
1. Location to edit the `telegraf.conf`
1. Configuration to run the external plugin with
[inputs.execd](/plugins/inputs/execd),
[processors.execd](/plugins/processors/execd), or
[outputs.execd](/plugins/outputs/execd)
7. Submit the plugin by opening a PR to add the external plugin to the
[/EXTERNAL_PLUGINS.md](/EXTERNAL_PLUGINS.md) list. Please include the
plugin name, link to the plugin repository and a short description of the
plugin.
[ssoroka/rand]: https://github.com/ssoroka/rand
[danielnelson/telegraf-execd-openvpn]: https://github.com/danielnelson/telegraf-execd-openvpn
[openvpn install]: https://github.com/danielnelson/telegraf-execd-openvpn#usage
[awsalarms install]: https://github.com/vipinvkmenon/awsalarms#installation
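For step 6 above, the `telegraf.conf` entry that runs an external input plugin
typically looks like the following sketch (the binary and config paths are
placeholders):

```toml
[[inputs.execd]]
  ## Path to the external plugin binary plus its own config file
  command = ["/usr/local/bin/my-plugin", "-config", "/etc/telegraf/my-plugin.conf"]
  ## How Telegraf signals the plugin to collect (see the inputs.execd docs)
  signal = "none"
```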

docs/FAQ.md
# Frequently Asked Questions
## When is the next release? When will my PR or fix get released?
Telegraf has four minor releases a year in March, June, September, and
December. In between each of those minor releases, there are 2-4 bug fix
releases that happen every 3 weeks.
This [Google Calendar][] is kept up to date with upcoming release dates.
Additionally, users can look at the [GitHub milestones][] for the next minor
and bug fix release.
PRs that resolve issues are released in the next release. PRs that introduce
new features are held for the next minor release. Users can view which
[GitHub milestones][] a PR belongs to in order to determine the release it
will go out with.
[Google Calendar]: https://calendar.google.com/calendar/embed?src=c_03d981cefd8d6432894cb162da5c6186e393bc0f970ca6c371201aa05d30d763%40group.calendar.google.com
[GitHub milestones]: https://github.com/influxdata/telegraf/milestones
## How can I filter or select specific metrics?
Telegraf has options to select certain metrics or tags as well as filter out
specific tags or fields:
- **Selectors** allow a user to include or exclude entire metrics based on the
metric name or tag key/pair values.
- **Modifiers** allow a user to remove tags and fields based on specific keys,
with glob support.
For more details and examples, see the [Metric Filtering][metric filtering]
section in the docs.
## Could not find a usable config.yml, you may have revoked the CircleCI OAuth app
This is an error from CircleCI during test runs.
To resolve the error, you need to log back into CircleCI with your
username/password, if that is how you log in, or, if you use GitHub login,
re-authenticate with GitHub to re-create your OAuth token.
That should regenerate your token and allow you to push a commit, or close
and reopen the PR, so that tests run.
## What does "Context Deadline exceeded (Client.Timeout while awaiting headers)" mean?
This is a generic error received from Go's HTTP client. It is generally the
result of a network blip or hiccup as a result of a DNS, proxy, firewall,
and/or other network issue.
The error should be temporary and Telegraf will recover shortly after without
the loss of data.
## How do I set the timestamp format for parsing data?
Telegraf's `timestamp_format` config option requires the use of
[Go's reference time][go ref time] to correctly translate the timestamp. For
example, if you have the time:
```text
2023-03-01T00:00:42.586+0800
```
A user needs the timestamp format:
```text
2006-01-02T15:04:05.000-0700
```
Users can try this out in the [Go playground][playground].
[go ref time]: https://pkg.go.dev/time#pkg-constants
[playground]: https://goplay.tools/snippet/hi9GIOG_gVQ
## Q: How can I monitor the Docker Engine Host from within a container?
You will need to setup several volume mounts as well as some environment
variables:
```shell
docker run --name telegraf \
-v /:/hostfs:ro \
-e HOST_ETC=/hostfs/etc \
-e HOST_PROC=/hostfs/proc \
-e HOST_SYS=/hostfs/sys \
-e HOST_VAR=/hostfs/var \
-e HOST_RUN=/hostfs/run \
-e HOST_MOUNT_PREFIX=/hostfs \
telegraf
```
## Q: Why do I get a "no such host" error resolving hostnames that other programs can resolve?
Go uses a pure Go resolver by default for [name resolution](https://golang.org/pkg/net/#hdr-Name_Resolution).
This resolver behaves differently than the C library functions but is more
efficient when used with the Go runtime.
If you encounter problems or want to use more advanced name resolution methods
that are unsupported by the pure Go resolver, you can switch to the cgo
resolver.
If running manually set:
```shell
export GODEBUG=netdns=cgo
```
If running as a service add the environment variable to `/etc/default/telegraf`:
```shell
GODEBUG=netdns=cgo
```
## Q: How can I manage series cardinality?
High [series cardinality][], when not properly managed, can cause high load on
your database. Telegraf attempts to avoid creating series with high
cardinality, but some monitoring workloads, such as tracking containers, are
inherently high cardinality. These workloads can still be monitored, but care
must be taken to manage cardinality growth.
You can use the following techniques to avoid cardinality issues:
- Use [metric filtering][] options to exclude unneeded measurements and tags.
- Write to a database with an appropriate [retention policy][].
- Consider using the [Time Series Index][tsi].
- Monitor your databases using the [show cardinality][] commands.
- Consult the [InfluxDB documentation][influx docs] for the most up-to-date techniques.
[series cardinality]: https://docs.influxdata.com/influxdb/v1.7/concepts/glossary/#series-cardinality
[metric filtering]: https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#metric-filtering
[retention policy]: https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/
[tsi]: https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/
[show cardinality]: https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality
[influx docs]: https://docs.influxdata.com/influxdb/latest/

docs/INPUTS.md
# Input Plugins
This section is for developers who want to create new collection inputs.
Telegraf is entirely plugin driven. This interface allows operators to
pick and choose what is gathered and makes it easy for developers
to create new ways of generating metrics.
Plugin authorship is kept as simple as possible to encourage people to develop
and submit new inputs.
## Input Plugin Guidelines
- A plugin must conform to the [telegraf.Input][] interface.
- Input Plugins should call `inputs.Add` in their `init` function to register
themselves. See below for a quick example.
- To be available within Telegraf itself, plugins must register themselves
using a file in `github.com/influxdata/telegraf/plugins/inputs/all` named
according to the plugin name. Make sure you also add build-tags to
conditionally build the plugin.
- Each plugin requires a file called `sample.conf` containing the sample
configuration for the plugin in TOML format.
Please consult the [Sample Config][] page for the latest style guidelines.
- Each plugin `README.md` file should include the `sample.conf` file in a
section describing the configuration by specifying a `toml` section in the
form `toml @sample.conf`. The specified file(s) are then injected
automatically into the Readme.
- Follow the recommended [Code Style][].
[Sample Config]: /docs/developers/SAMPLE_CONFIG.md
[Code Style]: /docs/developers/CODE_STYLE.md
[telegraf.Input]: https://godoc.org/github.com/influxdata/telegraf#Input
### Typed Metrics
In addition to the `AddFields` function, the accumulator also supports
functions to add typed metrics: `AddGauge`, `AddCounter`, etc. Metric types
are ignored by the InfluxDB output, but can be used for other outputs, such as
[prometheus][prom metric types].
[prom metric types]: https://prometheus.io/docs/concepts/metric_types/
### Data Formats
Some input plugins, such as the [exec][] plugin, can accept any supported
[input data formats][].
In order to enable this, you must specify a `SetParser(parser parsers.Parser)`
function on the plugin object (see the exec plugin for an example), as well as
defining `parser` as a field of the object.
You can then utilize the parser internally in your plugin, parsing data as you
see fit. Telegraf's configuration layer will take care of instantiating and
creating the `Parser` object.
Add the following to the sample configuration in the README.md:
```toml
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
[exec]: /plugins/inputs/exec
[input data formats]: /docs/DATA_FORMATS_INPUT.md
### Service Input Plugins
This section is for developers who want to create new "service" collection
inputs. A service plugin differs from a regular plugin in that it operates a
background service while Telegraf is running. One example would be the
`statsd` plugin, which operates a statsd server.
Service Input Plugins are substantially more complicated than a regular
plugin, as they will require threads and locks to verify data integrity.
Service Input Plugins should be avoided unless there is no way to create their
behavior with a regular plugin.
To create a Service Input implement the [telegraf.ServiceInput][] interface.
[telegraf.ServiceInput]: https://godoc.org/github.com/influxdata/telegraf#ServiceInput
### Metric Tracking
Metric Tracking provides a system to be notified when metrics have been
successfully written to their outputs or otherwise discarded. This allows
inputs to be created that function as reliable queue consumers.
Please note that this process applies only to internal plugins. For external
plugins, the metrics are acknowledged regardless of the actual output.
To get started with metric tracking begin by calling `WithTracking` on the
[telegraf.Accumulator][]. Add metrics using the `AddTrackingMetricGroup`
function on the returned [telegraf.TrackingAccumulator][] and store the
`TrackingID`. The `Delivered()` channel will return a type with information
about the final delivery status of the metric group.
Check the [amqp_consumer][] for an example implementation.
[telegraf.Accumulator]: https://godoc.org/github.com/influxdata/telegraf#Accumulator
[telegraf.TrackingAccumulator]: https://godoc.org/github.com/influxdata/telegraf#TrackingAccumulator
[amqp_consumer]: /plugins/inputs/amqp_consumer
### External Services
Plugins that connect to or require the use of external services should ensure
that those services are available. When this check happens may depend on the
type of input plugin:
For service input plugins, `Init` should be used to check for configuration
issues (e.g. bad option) and for other non-recoverable errors. Then `Start`
is used to create connections or other retry-able operations.
For normal inputs, `Init` should also be used to check for configuration issues
as well as any other dependencies that the plugin will require. For example,
any binaries that must exist for the plugin to function. If making a connection,
this should also take place in `Init`.
Developers may find that they switch to using service input plugins more and
more to take advantage of the retry-on-error behavior. This allows the user
to decide what to do on an error, such as ignoring the plugin or retrying
constantly.
## Input Plugin Example
Let's say you've written a plugin that emits metrics about processes on the
current host.
### Register Plugin
Registration of the plugin on `plugins/inputs/all/simple.go`:
```go
//go:build !custom || inputs || inputs.simple
package all
import _ "github.com/influxdata/telegraf/plugins/inputs/simple" // register plugin
```
The _build-tags_ in the first line allow you to selectively include or exclude
your plugin when customizing Telegraf.
### Plugin
Content of your plugin file e.g. `simple.go`
```go
//go:generate ../../../tools/readme_config_includer/generator
package simple
import (
_ "embed"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
//go:embed sample.conf
var sampleConfig string
type Simple struct {
Ok bool `toml:"ok"`
Log telegraf.Logger `toml:"-"`
}
func (*Simple) SampleConfig() string {
return sampleConfig
}
// Init is for setup, and validating config.
func (s *Simple) Init() error {
return nil
}
func (s *Simple) Gather(acc telegraf.Accumulator) error {
if s.Ok {
acc.AddFields("state", map[string]interface{}{"value": "pretty good"}, nil)
} else {
acc.AddFields("state", map[string]interface{}{"value": "not great"}, nil)
}
return nil
}
func init() {
inputs.Add("simple", func() telegraf.Input { return &Simple{} })
}
```

# Installation
Telegraf compiles to a single static binary, which makes it easy to install.
Both InfluxData and the community provide a wide range of methods to install
Telegraf. For details on each release, view the [changelog][] for the
latest updates and changes by version.
[changelog]: /CHANGELOG.md
There are many ways to obtain Telegraf:
* [Binary downloads](#binary-downloads)
* [Homebrew](#homebrew)
* [InfluxData Linux package repository](#influxdata-linux-package-repository)
* [Official Docker images](#official-docker-images)
* [Helm charts](#helm-charts)
* [Nightly builds](#nightly-builds)
* [Build from source](#build-from-source)
* [Custom builder](#custom-builder)
## Binary downloads
Binary downloads for a wide range of architectures and operating systems are
available from the [InfluxData downloads][] page or from the
[GitHub Releases][] page.
[InfluxData downloads]: https://www.influxdata.com/downloads
[GitHub Releases]: https://github.com/influxdata/telegraf/releases
## Homebrew
A [Homebrew Formula][] for Telegraf is available and is updated after each release:
```shell
brew update
brew install telegraf
```
Note that the Homebrew organization builds Telegraf itself and does not use
binaries built by InfluxData. This is important as Homebrew builds with CGO,
which means there are some differences between the official binaries and those
provided by Homebrew.
[Homebrew Formula]: https://formulae.brew.sh/formula/telegraf
## InfluxData Linux package repository
InfluxData provides a package repo that contains both DEB and RPM packages.
### DEB
For DEB-based platforms (e.g. Ubuntu and Debian) run the following to add the
repo GPG key and set up a new sources.list entry:
```shell
# influxdata-archive_compat.key GPG fingerprint:
# 9D53 9D90 D332 8DC7 D6C8 D3B9 D8FF 8E1F 7DF8 B07E
wget -q https://repos.influxdata.com/influxdata-archive_compat.key
echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
sudo apt-get update && sudo apt-get install telegraf
```
### RPM
For RPM-based platforms (e.g. RHEL, CentOS) use the following to create a repo
file and install Telegraf:
```shell
# influxdata-archive_compat.key GPG fingerprint:
# 9D53 9D90 D332 8DC7 D6C8 D3B9 D8FF 8E1F 7DF8 B07E
cat <<EOF | sudo tee /etc/yum.repos.d/influxdata.repo
[influxdata]
name = InfluxData Repository - Stable
baseurl = https://repos.influxdata.com/stable/\$basearch/main
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key
EOF
sudo yum install telegraf
```
## Official Docker images
Telegraf is available as an [Official image][] on DockerHub. Official images
are a curated set of Docker Images that also automatically get security updates
from Docker, follow a set of best practices, and are available via a shortcut
syntax which omits the organization.
InfluxData maintains a Debian and Alpine based image across the last three
minor releases. To pull the latest Telegraf images:
```shell
# latest Debian-based image
docker pull telegraf
# latest Alpine-based image
docker pull telegraf:alpine
```
See the [Telegraf DockerHub][] page for complete details on available images,
versions, and tags.
[official image]: https://docs.docker.com/trusted-content/official-images/
[Telegraf DockerHub]: https://hub.docker.com/_/telegraf
## Helm charts
A community-supported [helm chart][] is also available:
```shell
helm repo add influxdata https://helm.influxdata.com/
helm search repo influxdata
```
[helm chart]: https://github.com/influxdata/helm-charts/tree/master/charts/telegraf
## Nightly builds
[Nightly builds][] are available and are generated from the master branch each
day at around midnight UTC. The artifacts include binary, RPM, and DEB
packages, as well as nightly Docker images that are hosted on [quay.io][].
[Nightly builds]: /docs/NIGHTLIES.md
[quay.io]: https://quay.io/repository/influxdb/telegraf-nightly?tab=tags&tag=latest
## Build from source
Telegraf generally follows the latest version of Go and requires GNU make to use
the Makefile for builds.
On Windows, the makefile requires the use of a bash terminal to support all
makefile targets. An easy option to get bash for Windows is to use the version
that comes with [git for windows](https://gitforwindows.org/).
1. [Install Go](https://golang.org/doc/install)
2. Clone the Telegraf repository:
```shell
git clone https://github.com/influxdata/telegraf.git
```
3. Run `make build` from the source directory
```shell
cd telegraf
make build
```
## Custom builder
Telegraf also provides a way of building a custom minimized binary using the
[custom builder][]. This takes a user's configuration file(s), determines what
plugins are required, and builds a binary with only those plugins. This greatly
reduces the size of the Telegraf binary.
[custom builder]: /tools/custom_builder

# Integration Tests
## Running
To run all named integration tests:
```shell
make test-integration
```
To run all tests, including unit and integration tests:
```shell
go test -count 1 -race ./...
```
## Developing
To run integration tests against a service, the project uses
[testcontainers][1]. This makes it very easy to create and clean up
container-based tests.
`testutil/container.go` provides a `Container` type that wraps this library to
easily create containers for testing in Telegraf. A typical test looks like
the following:
```go
servicePort := "5432"
container := testutil.Container{
Image: "postgres:alpine",
ExposedPorts: []string{servicePort},
Env: map[string]string{
"POSTGRES_HOST_AUTH_METHOD": "trust",
},
WaitingFor: wait.ForAll(
wait.ForLog("database system is ready to accept connections"),
wait.ForListeningPort(nat.Port(servicePort)),
),
}
err := container.Start()
require.NoError(t, err, "failed to start container")
defer func() {
require.NoError(t, container.Terminate(), "terminating container failed")
}()
```
Users should start the container and then defer its termination.
The `testutil.Container` type requires at least an image, ports to expose, and a
wait stanza. See the following to learn more:
### Images
Images are pulled from [DockerHub][2] by default. When looking for and
selecting an image from DockerHub, please use the following priority order:
1. [Official Images][3]: these images are generally produced by the publisher
themselves and are fully supported with great documentation. These images are
easy to spot as they do not have an author in the name (e.g. "mysql").
2. Publisher produced: not all software has an entry in the above Official
Images. This may be due to the project being smaller or moving faster. In
this case, pull directly from the publisher's DockerHub whenever possible.
3. [Bitnami][4]: If neither of the above images exist, look at the images
produced and maintained by Bitnami. They go to great efforts to create images
for the most popular software, produce great documentation, and ensure that
images are maintained.
4. Other images: If, and only if, none of the above images will work for a
particular use-case, then another image can be used. Be prepared to justify
the use of these types of images.
### Ports
When the port is specified as a single value (e.g. `11211`) then testcontainers
will generate a random port for the service to start on. This way multiple
tests can be run and prevent ports from conflicting.
The test container expects an array of ports to expose for testing. For
most tests only a single port is used, but a user can specify more than one,
for example, to test whether another port is open.
On each container's DockerHub page, the README will usually specify what ports
are used by the container by default. For many containers this port can be
changed or specified with an environment variable.
If no ports are documented, a user can inspect the image's layers: find a
layer with the `EXPOSE` keyword to determine what ports are used by the
container.
### Wait Stanza
The wait stanza lays out what test containers will wait for to determine that
the container has started and is ready for use by the test. It is best to
provide not only a port, but also a log message. Ports can come up very early
in the container, and the service may not be ready.
To find a good log message, it is suggested to launch the container manually
and see what final message is printed. Usually this is something to the
effect of "ready for connections" or "setup complete". Also ensure that this
message only shows up once, or use an occurrence count (e.g. the
`WithOccurrence` option of testcontainers-go's `wait.ForLog`) to wait for the
expected number of appearances.
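When a startup message appears more than once (some services restart once during initialization), testcontainers-go's `wait.ForLog` can be combined with an occurrence count. A sketch of such a wait stanza, reusing the PostgreSQL message from the example above:

```go
// Require the ready message twice in addition to the open port before the
// container is considered started.
WaitingFor: wait.ForAll(
    wait.ForLog("database system is ready to accept connections").WithOccurrence(2),
    wait.ForListeningPort(nat.Port(servicePort)),
),
```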
### Other Parameters
There are other optional parameters that users can make use of for additional
configuration of the test containers:
- `BindMounts`: used to mount local test data into the container. The map key
is the location in the container and the value is the local file.
- `Entrypoint`: if a user wishes to override the entrypoint with a custom
command
- `Env`: to pass environmental variables to the container similar to Docker
CLI's `--env` option
- `Name`: if a container needs a hostname set or expects a certain name, use
this option to set the container's hostname
- `Networks`: if the user creates a custom network
[1]: <https://golang.testcontainers.org/> "testcontainers-go"
[2]: <https://hub.docker.com/> "DockerHub"
[3]: <https://hub.docker.com/search?q=&type=image&image_filter=official> "DockerHub Official Images"
[4]: <https://hub.docker.com/u/bitnami> "Bitnami Images"
## Network
By default the containers will use the bridge network where other containers
cannot talk to each other.
If a custom network is required for running tests, for example if containers
do need to communicate, then users can set that up with the following code:
```go
networkName := "test-network"
net, err := testcontainers.GenericNetwork(ctx, testcontainers.GenericNetworkRequest{
NetworkRequest: testcontainers.NetworkRequest{
Name: networkName,
Attachable: true,
CheckDuplicate: true,
},
})
require.NoError(t, err)
defer func() {
require.NoError(t, net.Remove(ctx), "terminating network failed")
}()
```
Then specify the network name in the container startup:
```go
zookeeper := testutil.Container{
Image: "wurstmeister/zookeeper",
ExposedPorts: []string{"2181:2181"},
Networks: []string{networkName},
WaitingFor: wait.ForLog("binding to port"),
Name: "telegraf-test-zookeeper",
}
```
## Contributing
When adding integrations tests please do the following:
- Add `Integration` to the end of the test name
- Use testcontainers when an external service is required
- Use the testutil.Container to setup and configure testcontainers
- Ensure the testcontainer wait stanza is well-tested
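Putting these conventions together, a sketch of an integration test reusing the PostgreSQL container from earlier in this document; the test name and plugin-specific assertions are illustrative only:

```go
// The test name ends in "Integration" and the test is skipped in short
// mode, so plain unit-test runs do not require Docker.
func TestPostgresqlIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("Skipping integration test in short mode")
    }

    servicePort := "5432"
    container := testutil.Container{
        Image:        "postgres:alpine",
        ExposedPorts: []string{servicePort},
        Env:          map[string]string{"POSTGRES_HOST_AUTH_METHOD": "trust"},
        WaitingFor: wait.ForAll(
            wait.ForLog("database system is ready to accept connections"),
            wait.ForListeningPort(nat.Port(servicePort)),
        ),
    }
    require.NoError(t, container.Start(), "failed to start container")
    defer func() {
        require.NoError(t, container.Terminate(), "terminating container failed")
    }()

    // ... exercise the plugin against the container's address and mapped port ...
}
```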

# Licenses of dependencies
When distributed in a binary form, Telegraf may contain portions of the
following works:
- cel.dev/expr [Apache License 2.0](https://github.com/google/cel-spec/blob/master/LICENSE)
- cloud.google.com/go [Apache License 2.0](https://github.com/googleapis/google-cloud-go/blob/master/LICENSE)
- code.cloudfoundry.org/clock [Apache License 2.0](https://github.com/cloudfoundry/clock/blob/master/LICENSE)
- collectd.org [ISC License](https://github.com/collectd/go-collectd/blob/master/LICENSE)
- dario.cat/mergo [BSD 3-Clause "New" or "Revised" License](https://github.com/imdario/mergo/blob/master/LICENSE)
- filippo.io/edwards25519 [BSD 3-Clause "New" or "Revised" License](https://github.com/FiloSottile/edwards25519/blob/main/LICENSE)
- github.com/99designs/keyring [MIT License](https://github.com/99designs/keyring/blob/master/LICENSE)
- github.com/Azure/azure-amqp-common-go [MIT License](https://github.com/Azure/azure-amqp-common-go/blob/master/LICENSE)
- github.com/Azure/azure-event-hubs-go [MIT License](https://github.com/Azure/azure-event-hubs-go/blob/master/LICENSE)
- github.com/Azure/azure-kusto-go [MIT License](https://github.com/Azure/azure-kusto-go/blob/master/LICENSE)
- github.com/Azure/azure-pipeline-go [MIT License](https://github.com/Azure/azure-pipeline-go/blob/master/LICENSE)
- github.com/Azure/azure-sdk-for-go [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/LICENSE.txt)
- github.com/Azure/azure-sdk-for-go/sdk/azcore [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azcore/LICENSE.txt)
- github.com/Azure/azure-sdk-for-go/sdk/azidentity [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azidentity/LICENSE.txt)
- github.com/Azure/azure-sdk-for-go/sdk/internal [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/internal/LICENSE.txt)
- github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/messaging/azeventhubs/LICENSE.txt)
- github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/monitor/armmonitor [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/monitor/armmonitor/LICENSE.txt)
- github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/resources/armresources/LICENSE.txt)
- github.com/Azure/azure-sdk-for-go/sdk/storage/azblob [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/storage/azblob/LICENSE.txt)
- github.com/Azure/azure-sdk-for-go/sdk/storage/azqueue [MIT License](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/storage/azqueue/LICENSE.txt)
- github.com/Azure/azure-storage-queue-go [MIT License](https://github.com/Azure/azure-storage-queue-go/blob/master/LICENSE)
- github.com/Azure/go-amqp [MIT License](https://github.com/Azure/go-amqp/blob/master/LICENSE)
- github.com/Azure/go-ansiterm [MIT License](https://github.com/Azure/go-ansiterm/blob/master/LICENSE)
- github.com/Azure/go-autorest [Apache License 2.0](https://github.com/Azure/go-autorest/blob/master/LICENSE)
- github.com/Azure/go-ntlmssp [MIT License](https://github.com/Azure/go-ntlmssp/blob/master/LICENSE)
- github.com/AzureAD/microsoft-authentication-library-for-go [MIT License](https://github.com/AzureAD/microsoft-authentication-library-for-go/blob/main/LICENSE)
- github.com/BurntSushi/toml [MIT License](https://github.com/BurntSushi/toml/blob/master/COPYING)
- github.com/ClickHouse/ch-go [Apache License 2.0](https://github.com/ClickHouse/ch-go/blob/main/LICENSE)
- github.com/ClickHouse/clickhouse-go [Apache License 2.0](https://github.com/ClickHouse/clickhouse-go/blob/master/LICENSE)
- github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp [Apache License 2.0](https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/blob/main/LICENSE)
- github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric [Apache License 2.0](https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/blob/main/LICENSE)
- github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping [Apache License 2.0](https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/blob/main/LICENSE)
- github.com/IBM/nzgo [MIT License](https://github.com/IBM/nzgo/blob/master/LICENSE.md)
- github.com/IBM/sarama [MIT License](https://github.com/IBM/sarama/blob/master/LICENSE.md)
- github.com/Masterminds/goutils [Apache License 2.0](https://github.com/Masterminds/goutils/blob/master/LICENSE.txt)
- github.com/Masterminds/semver [MIT License](https://github.com/Masterminds/semver/blob/master/LICENSE.txt)
- github.com/Masterminds/sprig [MIT License](https://github.com/Masterminds/sprig/blob/master/LICENSE.txt)
- github.com/Max-Sum/base32768 [MIT License](https://github.com/Max-Sum/base32768/blob/master/LICENSE)
- github.com/Mellanox/rdmamap [Apache License 2.0](https://github.com/Mellanox/rdmamap/blob/master/LICENSE)
- github.com/Microsoft/go-winio [MIT License](https://github.com/Microsoft/go-winio/blob/master/LICENSE)
- github.com/PaesslerAG/gval [BSD 3-Clause "New" or "Revised" License](https://github.com/PaesslerAG/gval/blob/master/LICENSE)
- github.com/SAP/go-hdb [Apache License 2.0](https://github.com/SAP/go-hdb/blob/main/LICENSE.md)
- github.com/abbot/go-http-auth [Apache License 2.0](https://github.com/abbot/go-http-auth/blob/master/LICENSE)
- github.com/aerospike/aerospike-client-go [Apache License 2.0](https://github.com/aerospike/aerospike-client-go/blob/master/LICENSE)
- github.com/alecthomas/participle [MIT License](https://github.com/alecthomas/participle/blob/master/COPYING)
- github.com/alecthomas/units [MIT License](https://github.com/alecthomas/units/blob/master/COPYING)
- github.com/alitto/pond [MIT License](https://github.com/alitto/pond/blob/master/LICENSE)
- github.com/aliyun/alibaba-cloud-sdk-go [Apache License 2.0](https://github.com/aliyun/alibaba-cloud-sdk-go/blob/master/LICENSE)
- github.com/amir/raidman [The Unlicense](https://github.com/amir/raidman/blob/master/UNLICENSE)
- github.com/andybalholm/brotli [MIT License](https://github.com/andybalholm/brotli/blob/master/LICENSE)
- github.com/antchfx/jsonquery [MIT License](https://github.com/antchfx/jsonquery/blob/master/LICENSE)
- github.com/antchfx/xmlquery [MIT License](https://github.com/antchfx/xmlquery/blob/master/LICENSE)
- github.com/antchfx/xpath [MIT License](https://github.com/antchfx/xpath/blob/master/LICENSE)
- github.com/antlr4-go/antlr [BSD 3-Clause "New" or "Revised" License](https://github.com/antlr/antlr4/blob/master/LICENSE.txt)
- github.com/apache/arrow-go [Apache License 2.0](https://github.com/apache/arrow-go/blob/main/LICENSE.txt)
- github.com/apache/arrow/go [Apache License 2.0](https://github.com/apache/arrow/blob/master/LICENSE.txt)
- github.com/apache/iotdb-client-go [Apache License 2.0](https://github.com/apache/iotdb-client-go/blob/main/LICENSE)
- github.com/apache/thrift [Apache License 2.0](https://github.com/apache/thrift/blob/master/LICENSE)
- github.com/apapsch/go-jsonmerge [MIT License](https://github.com/apapsch/go-jsonmerge/blob/master/LICENSE)
- github.com/aristanetworks/glog [Apache License 2.0](https://github.com/aristanetworks/glog/blob/master/LICENSE)
- github.com/aristanetworks/goarista [Apache License 2.0](https://github.com/aristanetworks/goarista/blob/master/COPYING)
- github.com/armon/go-metrics [MIT License](https://github.com/armon/go-metrics/blob/master/LICENSE)
- github.com/awnumar/memcall [Apache License 2.0](https://github.com/awnumar/memcall/blob/master/LICENSE)
- github.com/awnumar/memguard [Apache License 2.0](https://github.com/awnumar/memguard/blob/master/LICENSE)
- github.com/aws/aws-sdk-go-v2 [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/aws/protocol/eventstream/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/config [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/config/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/credentials [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/credentials/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/feature/ec2/imds [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/feature/ec2/imds/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/feature/s3/manager [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/feature/s3/manager/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/internal/configsources [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/internal/configsources/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/internal/endpoints [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/internal/endpoints/v2/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/internal/ini [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/internal/ini/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/internal/v4a [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/internal/v4a/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/cloudwatch [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/cloudwatch/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/cloudwatchlogs/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/dynamodb [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/dynamodb/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/ec2 [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/ec2/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/internal/accept-encoding/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/internal/checksum [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/internal/checksum/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/internal/endpoint-discovery/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/internal/presigned-url [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/internal/presigned-url/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/internal/s3shared [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/internal/s3shared/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/kinesis [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/kinesis/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/s3 [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/s3/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/sso [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/sso/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/ssooidc [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/ssooidc/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/sts [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/sts/LICENSE.txt)
- github.com/aws/aws-sdk-go-v2/service/timestreamwrite [Apache License 2.0](https://github.com/aws/aws-sdk-go-v2/blob/main/service/timestreamwrite/LICENSE.txt)
- github.com/aws/smithy-go [Apache License 2.0](https://github.com/aws/smithy-go/blob/main/LICENSE)
- github.com/benbjohnson/clock [MIT License](https://github.com/benbjohnson/clock/blob/master/LICENSE)
- github.com/beorn7/perks [MIT License](https://github.com/beorn7/perks/blob/master/LICENSE)
- github.com/blues/jsonata-go [MIT License](https://github.com/blues/jsonata-go/blob/main/LICENSE)
- github.com/bmatcuk/doublestar [MIT License](https://github.com/bmatcuk/doublestar/blob/master/LICENSE)
- github.com/boschrexroth/ctrlx-datalayer-golang [MIT License](https://github.com/boschrexroth/ctrlx-datalayer-golang/blob/main/LICENSE)
- github.com/brutella/dnssd [MIT License](https://github.com/brutella/dnssd/blob/master/LICENSE)
- github.com/bufbuild/protocompile [Apache License 2.0](https://github.com/bufbuild/protocompile/blob/main/LICENSE)
- github.com/caio/go-tdigest [MIT License](https://github.com/caio/go-tdigest/blob/master/LICENSE)
- github.com/cenkalti/backoff [MIT License](https://github.com/cenkalti/backoff/blob/master/LICENSE)
- github.com/cespare/xxhash [MIT License](https://github.com/cespare/xxhash/blob/master/LICENSE.txt)
- github.com/cisco-ie/nx-telemetry-proto [Apache License 2.0](https://github.com/cisco-ie/nx-telemetry-proto/blob/master/LICENSE)
- github.com/clarify/clarify-go [Apache License 2.0](https://github.com/clarify/clarify-go/blob/master/LICENSE)
- github.com/cloudevents/sdk-go [Apache License 2.0](https://github.com/cloudevents/sdk-go/blob/main/LICENSE)
- github.com/cncf/xds/go [Apache License 2.0](https://github.com/cncf/xds/blob/main/LICENSE)
- github.com/compose-spec/compose-go [Apache License 2.0](https://github.com/compose-spec/compose-go/blob/master/LICENSE)
- github.com/containerd/log [Apache License 2.0](https://github.com/containerd/log/blob/main/LICENSE)
- github.com/containerd/platforms [Apache License 2.0](https://github.com/containerd/platforms/blob/main/LICENSE)
- github.com/coocood/freecache [MIT License](https://github.com/coocood/freecache/blob/master/LICENSE)
- github.com/coreos/go-semver [Apache License 2.0](https://github.com/coreos/go-semver/blob/main/LICENSE)
- github.com/coreos/go-systemd [Apache License 2.0](https://github.com/coreos/go-systemd/blob/main/LICENSE)
- github.com/couchbase/go-couchbase [MIT License](https://github.com/couchbase/go-couchbase/blob/master/LICENSE)
- github.com/couchbase/gomemcached [MIT License](https://github.com/couchbase/gomemcached/blob/master/LICENSE)
- github.com/couchbase/goutils [Apache License 2.0](https://github.com/couchbase/goutils/blob/master/LICENSE.md)
- github.com/cpuguy83/dockercfg [MIT License](https://github.com/cpuguy83/dockercfg/blob/main/LICENSE)
- github.com/cpuguy83/go-md2man [MIT License](https://github.com/cpuguy83/go-md2man/blob/master/LICENSE.md)
- github.com/danieljoos/wincred [MIT License](https://github.com/danieljoos/wincred/blob/master/LICENSE)
- github.com/datadope-io/go-zabbix [MIT License](https://github.com/datadope-io/go-zabbix/blob/master/LICENSE)
- github.com/davecgh/go-spew [ISC License](https://github.com/davecgh/go-spew/blob/master/LICENSE)
- github.com/devigned/tab [MIT License](https://github.com/devigned/tab/blob/master/LICENSE)
- github.com/dgryski/go-rendezvous [MIT License](https://github.com/dgryski/go-rendezvous/blob/master/LICENSE)
- github.com/digitalocean/go-libvirt [Apache License 2.0](https://github.com/digitalocean/go-libvirt/blob/master/LICENSE.md)
- github.com/dimchansky/utfbom [Apache License 2.0](https://github.com/dimchansky/utfbom/blob/master/LICENSE)
- github.com/distribution/reference [Apache License 2.0](https://github.com/distribution/reference/blob/main/LICENSE)
- github.com/djherbis/times [MIT License](https://github.com/djherbis/times/blob/master/LICENSE)
- github.com/docker/docker [Apache License 2.0](https://github.com/docker/docker/blob/master/LICENSE)
- github.com/docker/go-connections [Apache License 2.0](https://github.com/docker/go-connections/blob/master/LICENSE)
- github.com/docker/go-units [Apache License 2.0](https://github.com/docker/go-units/blob/master/LICENSE)
- github.com/dustin/go-humanize [MIT License](https://github.com/dustin/go-humanize/blob/master/LICENSE)
- github.com/dvsekhvalnov/jose2go [MIT License](https://github.com/dvsekhvalnov/jose2go/blob/master/LICENSE)
- github.com/dynatrace-oss/dynatrace-metric-utils-go [Apache License 2.0](https://github.com/dynatrace-oss/dynatrace-metric-utils-go/blob/master/LICENSE)
- github.com/eapache/go-resiliency [MIT License](https://github.com/eapache/go-resiliency/blob/master/LICENSE)
- github.com/eapache/go-xerial-snappy [MIT License](https://github.com/eapache/go-xerial-snappy/blob/master/LICENSE)
- github.com/eapache/queue [MIT License](https://github.com/eapache/queue/blob/master/LICENSE)
- github.com/ebitengine/purego [Apache License 2.0](https://github.com/ebitengine/purego/blob/main/LICENSE)
- github.com/eclipse/paho.golang [Eclipse Public License - v 2.0](https://github.com/eclipse/paho.golang/blob/master/LICENSE)
- github.com/eclipse/paho.mqtt.golang [Eclipse Public License - v 2.0](https://github.com/eclipse/paho.mqtt.golang/blob/master/LICENSE)
- github.com/emicklei/go-restful [MIT License](https://github.com/emicklei/go-restful/blob/v3/LICENSE)
- github.com/envoyproxy/go-control-plane/envoy [Apache License 2.0](https://github.com/envoyproxy/go-control-plane/blob/main/LICENSE)
- github.com/envoyproxy/protoc-gen-validate [Apache License 2.0](https://github.com/bufbuild/protoc-gen-validate/blob/main/LICENSE)
- github.com/facebook/time [Apache License 2.0](https://github.com/facebook/time/blob/main/LICENSE)
- github.com/fatih/color [MIT License](https://github.com/fatih/color/blob/master/LICENSE.md)
- github.com/felixge/httpsnoop [MIT License](https://github.com/felixge/httpsnoop/blob/master/LICENSE.txt)
- github.com/fxamacker/cbor [MIT License](https://github.com/fxamacker/cbor/blob/master/LICENSE)
- github.com/gabriel-vasile/mimetype [MIT License](https://github.com/gabriel-vasile/mimetype/blob/master/LICENSE)
- github.com/go-asn1-ber/asn1-ber [MIT License](https://github.com/go-asn1-ber/asn1-ber/blob/v1.3/LICENSE)
- github.com/go-chi/chi [MIT License](https://github.com/go-chi/chi/blob/master/LICENSE)
- github.com/go-faster/city [MIT License](https://github.com/go-faster/city/blob/main/LICENSE)
- github.com/go-faster/errors [BSD 3-Clause "New" or "Revised" License](https://github.com/go-faster/errors/blob/main/LICENSE)
- github.com/go-git/go-billy [Apache License 2.0](https://github.com/go-git/go-billy/blob/master/LICENSE)
- github.com/go-jose/go-jose [Apache License 2.0](https://github.com/go-jose/go-jose/blob/main/LICENSE)
- github.com/go-ldap/ldap [MIT License](https://github.com/go-ldap/ldap/blob/v3.4.1/LICENSE)
- github.com/go-logfmt/logfmt [MIT License](https://github.com/go-logfmt/logfmt/blob/master/LICENSE)
- github.com/go-logr/logr [Apache License 2.0](https://github.com/go-logr/logr/blob/master/LICENSE)
- github.com/go-logr/stdr [Apache License 2.0](https://github.com/go-logr/stdr/blob/master/LICENSE)
- github.com/go-ole/go-ole [MIT License](https://github.com/go-ole/go-ole/blob/master/LICENSE)
- github.com/go-openapi/jsonpointer [Apache License 2.0](https://github.com/go-openapi/jsonpointer/blob/master/LICENSE)
- github.com/go-openapi/jsonreference [Apache License 2.0](https://github.com/go-openapi/jsonreference/blob/master/LICENSE)
- github.com/go-openapi/swag [Apache License 2.0](https://github.com/go-openapi/swag/blob/master/LICENSE)
- github.com/go-redis/redis [BSD 2-Clause "Simplified" License](https://github.com/go-redis/redis/blob/master/LICENSE)
- github.com/go-sql-driver/mysql [Mozilla Public License 2.0](https://github.com/go-sql-driver/mysql/blob/master/LICENSE)
- github.com/go-stack/stack [MIT License](https://github.com/go-stack/stack/blob/master/LICENSE.md)
- github.com/go-stomp/stomp [Apache License 2.0](https://github.com/go-stomp/stomp/blob/master/LICENSE.txt)
- github.com/gobwas/glob [MIT License](https://github.com/gobwas/glob/blob/master/LICENSE)
- github.com/goccy/go-json [MIT License](https://github.com/goccy/go-json/blob/master/LICENSE)
- github.com/godbus/dbus [BSD 2-Clause "Simplified" License](https://github.com/godbus/dbus/blob/master/LICENSE)
- github.com/gofrs/uuid [MIT License](https://github.com/gofrs/uuid/blob/master/LICENSE)
- github.com/gogo/protobuf [BSD 3-Clause Clear License](https://github.com/gogo/protobuf/blob/master/LICENSE)
- github.com/golang-jwt/jwt [MIT License](https://github.com/golang-jwt/jwt/blob/main/LICENSE)
- github.com/golang-sql/civil [Apache License 2.0](https://github.com/golang-sql/civil/blob/master/LICENSE)
- github.com/golang-sql/sqlexp [BSD 3-Clause "New" or "Revised" License](https://github.com/golang-sql/sqlexp/blob/master/LICENSE)
- github.com/golang/geo [Apache License 2.0](https://github.com/golang/geo/blob/master/LICENSE)
- github.com/golang/groupcache [Apache License 2.0](https://github.com/golang/groupcache/blob/master/LICENSE)
- github.com/golang/protobuf [BSD 3-Clause "New" or "Revised" License](https://github.com/golang/protobuf/blob/master/LICENSE)
- github.com/golang/snappy [BSD 3-Clause "New" or "Revised" License](https://github.com/golang/snappy/blob/master/LICENSE)
- github.com/google/cel-go [Apache License 2.0](https://github.com/google/cel-go/blob/master/LICENSE)
- github.com/google/flatbuffers [Apache License 2.0](https://github.com/google/flatbuffers/blob/master/LICENSE)
- github.com/google/gnostic-models [Apache License 2.0](https://github.com/google/gnostic-models/blob/master/LICENSE)
- github.com/google/gnxi [Apache License 2.0](https://github.com/google/gnxi/blob/master/LICENSE)
- github.com/google/go-cmp [BSD 3-Clause "New" or "Revised" License](https://github.com/google/go-cmp/blob/master/LICENSE)
- github.com/google/go-github [BSD 3-Clause "New" or "Revised" License](https://github.com/google/go-github/blob/master/LICENSE)
- github.com/google/go-querystring [BSD 3-Clause "New" or "Revised" License](https://github.com/google/go-querystring/blob/master/LICENSE)
- github.com/google/go-tpm [Apache License 2.0](https://github.com/google/go-tpm/blob/main/LICENSE)
- github.com/google/s2a-go [Apache License 2.0](https://github.com/google/s2a-go/blob/main/LICENSE.md)
- github.com/google/uuid [BSD 3-Clause "New" or "Revised" License](https://github.com/google/uuid/blob/master/LICENSE)
- github.com/googleapis/enterprise-certificate-proxy [Apache License 2.0](https://github.com/googleapis/enterprise-certificate-proxy/blob/main/LICENSE)
- github.com/googleapis/gax-go [BSD 3-Clause "New" or "Revised" License](https://github.com/googleapis/gax-go/blob/master/LICENSE)
- github.com/gopacket/gopacket [BSD 3-Clause "New" or "Revised" License](https://github.com/gopacket/gopacket/blob/master/LICENSE)
- github.com/gopcua/opcua [MIT License](https://github.com/gopcua/opcua/blob/master/LICENSE)
- github.com/gophercloud/gophercloud [Apache License 2.0](https://github.com/gophercloud/gophercloud/blob/master/LICENSE)
- github.com/gorcon/rcon [MIT License](https://github.com/gorcon/rcon/blob/master/LICENSE)
- github.com/gorilla/mux [BSD 3-Clause "New" or "Revised" License](https://github.com/gorilla/mux/blob/master/LICENSE)
- github.com/gorilla/websocket [BSD 2-Clause "Simplified" License](https://github.com/gorilla/websocket/blob/master/LICENSE)
- github.com/gosnmp/gosnmp [BSD 2-Clause "Simplified" License](https://github.com/gosnmp/gosnmp/blob/master/LICENSE)
- github.com/grafana/regexp [BSD 3-Clause "New" or "Revised" License](https://github.com/grafana/regexp/blob/main/LICENSE)
- github.com/grid-x/modbus [BSD 3-Clause "New" or "Revised" License](https://github.com/grid-x/modbus/blob/master/LICENSE)
- github.com/grid-x/serial [MIT License](https://github.com/grid-x/serial/blob/master/LICENSE)
- github.com/grpc-ecosystem/grpc-gateway [BSD 3-Clause "New" or "Revised" License](https://github.com/grpc-ecosystem/grpc-gateway/blob/main/LICENSE)
- github.com/gsterjov/go-libsecret [MIT License](https://github.com/gsterjov/go-libsecret/blob/master/LICENSE)
- github.com/gwos/tcg/sdk [MIT License](https://github.com/gwos/tcg/blob/master/LICENSE)
- github.com/hailocab/go-hostpool [MIT License](https://github.com/hailocab/go-hostpool/blob/master/LICENSE)
- github.com/hashicorp/consul/api [Mozilla Public License 2.0](https://github.com/hashicorp/consul/blob/main/api/LICENSE)
- github.com/hashicorp/errwrap [Mozilla Public License 2.0](https://github.com/hashicorp/errwrap/blob/master/LICENSE)
- github.com/hashicorp/go-cleanhttp [Mozilla Public License 2.0](https://github.com/hashicorp/go-cleanhttp/blob/master/LICENSE)
- github.com/hashicorp/go-hclog [MIT License](https://github.com/hashicorp/go-hclog/blob/main/LICENSE)
- github.com/hashicorp/go-immutable-radix [Mozilla Public License 2.0](https://github.com/hashicorp/go-immutable-radix/blob/master/LICENSE)
- github.com/hashicorp/go-multierror [Mozilla Public License 2.0](https://github.com/hashicorp/go-multierror/blob/master/LICENSE)
- github.com/hashicorp/go-rootcerts [Mozilla Public License 2.0](https://github.com/hashicorp/go-rootcerts/blob/master/LICENSE)
- github.com/hashicorp/go-uuid [Mozilla Public License 2.0](https://github.com/hashicorp/go-uuid/blob/master/LICENSE)
- github.com/hashicorp/golang-lru [Mozilla Public License 2.0](https://github.com/hashicorp/golang-lru/blob/master/LICENSE)
- github.com/hashicorp/packer-plugin-sdk [Mozilla Public License 2.0](https://github.com/hashicorp/packer-plugin-sdk/blob/main/LICENSE)
- github.com/hashicorp/serf [Mozilla Public License 2.0](https://github.com/hashicorp/serf/blob/master/LICENSE)
- github.com/huandu/xstrings [MIT License](https://github.com/huandu/xstrings/blob/master/LICENSE)
- github.com/imdario/mergo [BSD 3-Clause "New" or "Revised" License](https://github.com/imdario/mergo/blob/master/LICENSE)
- github.com/influxdata/influxdb-observability/common [MIT License](https://github.com/influxdata/influxdb-observability/blob/main/LICENSE)
- github.com/influxdata/influxdb-observability/influx2otel [MIT License](https://github.com/influxdata/influxdb-observability/blob/main/LICENSE)
- github.com/influxdata/influxdb-observability/otel2influx [MIT License](https://github.com/influxdata/influxdb-observability/blob/main/LICENSE)
- github.com/influxdata/line-protocol [MIT License](https://github.com/influxdata/line-protocol/blob/v2/LICENSE)
- github.com/influxdata/tail [MIT License](https://github.com/influxdata/tail/blob/master/LICENSE.txt)
- github.com/influxdata/toml [MIT License](https://github.com/influxdata/toml/blob/master/LICENSE)
- github.com/intel/iaevents [Apache License 2.0](https://github.com/intel/iaevents/blob/main/LICENSE)
- github.com/intel/powertelemetry [Apache License 2.0](https://github.com/intel/powertelemetry/blob/main/LICENSE)
- github.com/jackc/chunkreader [MIT License](https://github.com/jackc/chunkreader/blob/master/LICENSE)
- github.com/jackc/pgconn [MIT License](https://github.com/jackc/pgconn/blob/master/LICENSE)
- github.com/jackc/pgio [MIT License](https://github.com/jackc/pgio/blob/master/LICENSE)
- github.com/jackc/pgpassfile [MIT License](https://github.com/jackc/pgpassfile/blob/master/LICENSE)
- github.com/jackc/pgproto3 [MIT License](https://github.com/jackc/pgproto3/blob/master/LICENSE)
- github.com/jackc/pgservicefile [MIT License](https://github.com/jackc/pgservicefile/blob/master/LICENSE)
- github.com/jackc/pgtype [MIT License](https://github.com/jackc/pgtype/blob/master/LICENSE)
- github.com/jackc/pgx [MIT License](https://github.com/jackc/pgx/blob/master/LICENSE)
- github.com/jackc/puddle [MIT License](https://github.com/jackc/puddle/blob/master/LICENSE)
- github.com/jaegertracing/jaeger [Apache License 2.0](https://github.com/jaegertracing/jaeger/blob/master/LICENSE)
- github.com/jcmturner/aescts [Apache License 2.0](https://github.com/jcmturner/aescts/blob/master/LICENSE)
- github.com/jcmturner/dnsutils [Apache License 2.0](https://github.com/jcmturner/dnsutils/blob/master/LICENSE)
- github.com/jcmturner/gofork [BSD 3-Clause "New" or "Revised" License](https://github.com/jcmturner/gofork/blob/master/LICENSE)
- github.com/jcmturner/goidentity [Apache License 2.0](https://github.com/jcmturner/goidentity/blob/master/LICENSE)
- github.com/jcmturner/gokrb5 [Apache License 2.0](https://github.com/jcmturner/gokrb5/blob/master/LICENSE)
- github.com/jcmturner/rpc [Apache License 2.0](https://github.com/jcmturner/rpc/blob/master/LICENSE)
- github.com/jedib0t/go-pretty [MIT License](https://github.com/jedib0t/go-pretty/blob/main/LICENSE)
- github.com/jeremywohl/flatten [MIT License](https://github.com/jeremywohl/flatten/blob/master/LICENSE)
- github.com/jmespath/go-jmespath [Apache License 2.0](https://github.com/jmespath/go-jmespath/blob/master/LICENSE)
- github.com/jmhodges/clock [MIT License](https://github.com/jmhodges/clock/blob/main/LICENSE)
- github.com/josharian/intern [MIT License](https://github.com/josharian/intern/blob/master/LICENSE.md)
- github.com/josharian/native [MIT License](https://github.com/josharian/native/blob/main/license)
- github.com/jpillora/backoff [MIT License](https://github.com/jpillora/backoff/blob/master/LICENSE)
- github.com/json-iterator/go [MIT License](https://github.com/json-iterator/go/blob/master/LICENSE)
- github.com/jzelinskie/whirlpool [BSD 3-Clause "New" or "Revised" License](https://github.com/jzelinskie/whirlpool/blob/master/LICENSE)
- github.com/karrick/godirwalk [BSD 2-Clause "Simplified" License](https://github.com/karrick/godirwalk/blob/master/LICENSE)
- github.com/kballard/go-shellquote [MIT License](https://github.com/kballard/go-shellquote/blob/master/LICENSE)
- github.com/klauspost/compress [BSD 3-Clause Clear License](https://github.com/klauspost/compress/blob/master/LICENSE)
- github.com/klauspost/cpuid [MIT License](https://github.com/klauspost/cpuid/blob/master/LICENSE)
- github.com/klauspost/pgzip [MIT License](https://github.com/klauspost/pgzip/blob/master/LICENSE)
- github.com/kolo/xmlrpc [MIT License](https://github.com/kolo/xmlrpc/blob/master/LICENSE)
- github.com/kr/fs [BSD 3-Clause "New" or "Revised" License](https://github.com/kr/fs/blob/main/LICENSE)
- github.com/kylelemons/godebug [Apache License 2.0](https://github.com/kylelemons/godebug/blob/master/LICENSE)
- github.com/leodido/go-syslog [MIT License](https://github.com/influxdata/go-syslog/blob/develop/LICENSE)
- github.com/leodido/ragel-machinery [MIT License](https://github.com/leodido/ragel-machinery/blob/develop/LICENSE)
- github.com/linkedin/goavro [Apache License 2.0](https://github.com/linkedin/goavro/blob/master/LICENSE)
- github.com/logzio/azure-monitor-metrics-receiver [MIT License](https://github.com/logzio/azure-monitor-metrics-receiver/blob/master/LICENSE)
- github.com/magiconair/properties [BSD 2-Clause "Simplified" License](https://github.com/magiconair/properties/blob/main/LICENSE.md)
- github.com/mailru/easyjson [MIT License](https://github.com/mailru/easyjson/blob/master/LICENSE)
- github.com/mattn/go-colorable [MIT License](https://github.com/mattn/go-colorable/blob/master/LICENSE)
- github.com/mattn/go-ieproxy [MIT License](https://github.com/mattn/go-ieproxy/blob/master/LICENSE)
- github.com/mattn/go-isatty [MIT License](https://github.com/mattn/go-isatty/blob/master/LICENSE)
- github.com/mattn/go-runewidth [MIT License](https://github.com/mattn/go-runewidth/blob/master/LICENSE)
- github.com/mdlayher/apcupsd [MIT License](https://github.com/mdlayher/apcupsd/blob/master/LICENSE.md)
- github.com/mdlayher/genetlink [MIT License](https://github.com/mdlayher/genetlink/blob/master/LICENSE.md)
- github.com/mdlayher/netlink [MIT License](https://github.com/mdlayher/netlink/blob/master/LICENSE.md)
- github.com/mdlayher/socket [MIT License](https://github.com/mdlayher/socket/blob/master/LICENSE.md)
- github.com/mdlayher/vsock [MIT License](https://github.com/mdlayher/vsock/blob/main/LICENSE.md)
- github.com/microsoft/ApplicationInsights-Go [MIT License](https://github.com/microsoft/ApplicationInsights-Go/blob/master/LICENSE)
- github.com/microsoft/go-mssqldb [BSD 3-Clause "New" or "Revised" License](https://github.com/microsoft/go-mssqldb/blob/master/LICENSE.txt)
- github.com/miekg/dns [BSD 3-Clause Clear License](https://github.com/miekg/dns/blob/master/LICENSE)
- github.com/minio/highwayhash [Apache License 2.0](https://github.com/minio/highwayhash/blob/master/LICENSE)
- github.com/mitchellh/copystructure [MIT License](https://github.com/mitchellh/copystructure/blob/master/LICENSE)
- github.com/mitchellh/go-homedir [MIT License](https://github.com/mitchellh/go-homedir/blob/master/LICENSE)
- github.com/mitchellh/mapstructure [MIT License](https://github.com/mitchellh/mapstructure/blob/master/LICENSE)
- github.com/mitchellh/reflectwalk [MIT License](https://github.com/mitchellh/reflectwalk/blob/master/LICENSE)
- github.com/moby/docker-image-spec [Apache License 2.0](https://github.com/moby/docker-image-spec/blob/main/LICENSE)
- github.com/moby/go-archive [Apache License 2.0](https://github.com/moby/go-archive/blob/main/LICENSE)
- github.com/moby/ipvs [Apache License 2.0](https://github.com/moby/ipvs/blob/master/LICENSE)
- github.com/moby/patternmatcher [Apache License 2.0](https://github.com/moby/patternmatcher/blob/main/LICENSE)
- github.com/moby/sys/sequential [Apache License 2.0](https://github.com/moby/sys/blob/main/LICENSE)
- github.com/moby/sys/user [Apache License 2.0](https://github.com/moby/sys/blob/main/LICENSE)
- github.com/moby/sys/userns [Apache License 2.0](https://github.com/moby/sys/blob/main/LICENSE)
- github.com/moby/term [Apache License 2.0](https://github.com/moby/term/blob/master/LICENSE)
- github.com/modern-go/concurrent [Apache License 2.0](https://github.com/modern-go/concurrent/blob/master/LICENSE)
- github.com/modern-go/reflect2 [Apache License 2.0](https://github.com/modern-go/reflect2/blob/master/LICENSE)
- github.com/montanaflynn/stats [MIT License](https://github.com/montanaflynn/stats/blob/master/LICENSE)
- github.com/morikuni/aec [MIT License](https://github.com/morikuni/aec/blob/master/LICENSE)
- github.com/mtibben/percent [MIT License](https://github.com/mtibben/percent/blob/master/LICENSE)
- github.com/multiplay/go-ts3 [BSD 2-Clause "Simplified" License](https://github.com/multiplay/go-ts3/blob/master/LICENSE)
- github.com/munnerz/goautoneg [BSD 3-Clause Clear License](https://github.com/munnerz/goautoneg/blob/master/LICENSE)
- github.com/naoina/go-stringutil [MIT License](https://github.com/naoina/go-stringutil/blob/master/LICENSE)
- github.com/nats-io/jwt [Apache License 2.0](https://github.com/nats-io/jwt/blob/master/LICENSE)
- github.com/nats-io/nats-server [Apache License 2.0](https://github.com/nats-io/nats-server/blob/master/LICENSE)
- github.com/nats-io/nats.go [Apache License 2.0](https://github.com/nats-io/nats.go/blob/master/LICENSE)
- github.com/nats-io/nkeys [Apache License 2.0](https://github.com/nats-io/nkeys/blob/master/LICENSE)
- github.com/nats-io/nuid [Apache License 2.0](https://github.com/nats-io/nuid/blob/master/LICENSE)
- github.com/ncruces/go-strftime [MIT License](https://github.com/ncruces/go-strftime/blob/main/LICENSE)
- github.com/ncw/swift [MIT License](https://github.com/ncw/swift/blob/master/COPYING)
- github.com/netsampler/goflow2 [BSD 3-Clause "New" or "Revised" License](https://github.com/netsampler/goflow2/blob/main/LICENSE)
- github.com/newrelic/newrelic-telemetry-sdk-go [Apache License 2.0](https://github.com/newrelic/newrelic-telemetry-sdk-go/blob/master/LICENSE.md)
- github.com/nsqio/go-nsq [MIT License](https://github.com/nsqio/go-nsq/blob/master/LICENSE)
- github.com/nwaples/tacplus [BSD 2-Clause "Simplified" License](https://github.com/nwaples/tacplus/blob/master/LICENSE)
- github.com/oapi-codegen/runtime [Apache License 2.0](https://github.com/oapi-codegen/runtime/blob/main/LICENSE)
- github.com/olivere/elastic [MIT License](https://github.com/olivere/elastic/blob/release-branch.v7/LICENSE)
- github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatautil [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/LICENSE)
- github.com/openconfig/gnmi [Apache License 2.0](https://github.com/openconfig/gnmi/blob/master/LICENSE)
- github.com/openconfig/goyang [Apache License 2.0](https://github.com/openconfig/goyang/blob/master/LICENSE)
- github.com/opencontainers/go-digest [Apache License 2.0](https://github.com/opencontainers/go-digest/blob/master/LICENSE)
- github.com/opencontainers/image-spec [Apache License 2.0](https://github.com/opencontainers/image-spec/blob/master/LICENSE)
- github.com/opensearch-project/opensearch-go [Apache License 2.0](https://github.com/opensearch-project/opensearch-go/blob/main/LICENSE.txt)
- github.com/opentracing/opentracing-go [Apache License 2.0](https://github.com/opentracing/opentracing-go/blob/master/LICENSE)
- github.com/p4lang/p4runtime [Apache License 2.0](https://github.com/p4lang/p4runtime/blob/main/LICENSE)
- github.com/paulmach/orb [MIT License](https://github.com/paulmach/orb/blob/master/LICENSE.md)
- github.com/pavlo-v-chernykh/keystore-go [MIT License](https://github.com/pavlo-v-chernykh/keystore-go/blob/master/LICENSE)
- github.com/pborman/ansi [BSD 3-Clause "New" or "Revised" License](https://github.com/pborman/ansi/blob/master/LICENSE)
- github.com/pcolladosoto/goslurm [MIT License](https://github.com/pcolladosoto/goslurm/blob/main/LICENSE)
- github.com/peterbourgon/unixtransport [Apache License 2.0](https://github.com/peterbourgon/unixtransport/blob/main/LICENSE)
- github.com/philhofer/fwd [MIT License](https://github.com/philhofer/fwd/blob/master/LICENSE.md)
- github.com/pierrec/lz4 [BSD 3-Clause "New" or "Revised" License](https://github.com/pierrec/lz4/blob/master/LICENSE)
- github.com/pion/dtls [MIT License](https://github.com/pion/dtls/blob/master/LICENSES/MIT.txt)
- github.com/pion/logging [MIT License](https://github.com/pion/logging/blob/master/LICENSES/MIT.txt)
- github.com/pion/transport [MIT License](https://github.com/pion/transport/blob/master/LICENSES/MIT.txt)
- github.com/pkg/browser [BSD 2-Clause "Simplified" License](https://github.com/pkg/browser/blob/master/LICENSE)
- github.com/pkg/errors [BSD 2-Clause "Simplified" License](https://github.com/pkg/errors/blob/master/LICENSE)
- github.com/pkg/sftp [BSD 2-Clause "Simplified" License](https://github.com/pkg/sftp/blob/master/LICENSE)
- github.com/pkg/xattr [BSD 2-Clause "Simplified" License](https://github.com/pkg/xattr/blob/master/LICENSE)
- github.com/pmezard/go-difflib [BSD 3-Clause Clear License](https://github.com/pmezard/go-difflib/blob/master/LICENSE)
- github.com/prometheus-community/pro-bing [MIT License](https://github.com/prometheus-community/pro-bing/blob/main/LICENSE)
- github.com/prometheus/client_golang [Apache License 2.0](https://github.com/prometheus/client_golang/blob/master/LICENSE)
- github.com/prometheus/client_model [Apache License 2.0](https://github.com/prometheus/client_model/blob/master/LICENSE)
- github.com/prometheus/common [Apache License 2.0](https://github.com/prometheus/common/blob/master/LICENSE)
- github.com/prometheus/procfs [Apache License 2.0](https://github.com/prometheus/procfs/blob/master/LICENSE)
- github.com/prometheus/prometheus [Apache License 2.0](https://github.com/prometheus/prometheus/blob/master/LICENSE)
- github.com/rabbitmq/amqp091-go [BSD 2-Clause "Simplified" License](https://github.com/rabbitmq/amqp091-go/blob/main/LICENSE)
- github.com/rclone/rclone [MIT License](https://github.com/rclone/rclone/blob/master/COPYING)
- github.com/rcrowley/go-metrics [BSD 2-Clause with views sentence](https://github.com/rcrowley/go-metrics/blob/master/LICENSE)
- github.com/redis/go-redis [BSD 2-Clause "Simplified" License](https://github.com/redis/go-redis/blob/master/LICENSE)
- github.com/remyoudompheng/bigfft [BSD 3-Clause "New" or "Revised" License](https://github.com/remyoudompheng/bigfft/blob/master/LICENSE)
- github.com/rfjakob/eme [MIT License](https://github.com/rfjakob/eme/blob/master/LICENSE)
- github.com/riemann/riemann-go-client [MIT License](https://github.com/riemann/riemann-go-client/blob/master/LICENSE)
- github.com/rivo/uniseg [MIT License](https://github.com/rivo/uniseg/blob/master/LICENSE.txt)
- github.com/robbiet480/go.nut [MIT License](https://github.com/robbiet480/go.nut/blob/master/LICENSE)
- github.com/robinson/gos7 [BSD 3-Clause "New" or "Revised" License](https://github.com/robinson/gos7/blob/master/LICENSE)
- github.com/russross/blackfriday [BSD 2-Clause "Simplified" License](https://github.com/russross/blackfriday/blob/master/LICENSE.txt)
- github.com/safchain/ethtool [Apache License 2.0](https://github.com/safchain/ethtool/blob/master/LICENSE)
- github.com/samber/lo [MIT License](https://github.com/samber/lo/blob/master/LICENSE)
- github.com/seancfoley/bintree [Apache License 2.0](https://github.com/seancfoley/bintree/blob/master/LICENSE)
- github.com/seancfoley/ipaddress-go [Apache License 2.0](https://github.com/seancfoley/ipaddress-go/blob/master/LICENSE)
- github.com/segmentio/asm [MIT License](https://github.com/segmentio/asm/blob/main/LICENSE)
- github.com/shirou/gopsutil [BSD 3-Clause Clear License](https://github.com/shirou/gopsutil/blob/master/LICENSE)
- github.com/shopspring/decimal [MIT License](https://github.com/shopspring/decimal/blob/master/LICENSE)
- github.com/showwin/speedtest-go [MIT License](https://github.com/showwin/speedtest-go/blob/master/LICENSE)
- github.com/signalfx/com_signalfx_metrics_protobuf [Apache License 2.0](https://github.com/signalfx/com_signalfx_metrics_protobuf/blob/master/LICENSE)
- github.com/signalfx/gohistogram [MIT License](https://github.com/signalfx/gohistogram/blob/master/LICENSE)
- github.com/signalfx/golib [Apache License 2.0](https://github.com/signalfx/golib/blob/master/LICENSE)
- github.com/signalfx/sapm-proto [Apache License 2.0](https://github.com/signalfx/sapm-proto/blob/master/LICENSE)
- github.com/sijms/go-ora [MIT License](https://github.com/sijms/go-ora/blob/master/LICENSE)
- github.com/sirupsen/logrus [MIT License](https://github.com/sirupsen/logrus/blob/master/LICENSE)
- github.com/sleepinggenius2/gosmi [MIT License](https://github.com/sleepinggenius2/gosmi/blob/master/LICENSE)
- github.com/snowflakedb/gosnowflake [Apache License 2.0](https://github.com/snowflakedb/gosnowflake/blob/master/LICENSE)
- github.com/spf13/cast [MIT License](https://github.com/spf13/cast/blob/master/LICENSE)
- github.com/spf13/pflag [BSD 3-Clause "New" or "Revised" License](https://github.com/spf13/pflag/blob/master/LICENSE)
- github.com/spiffe/go-spiffe [Apache License 2.0](https://github.com/spiffe/go-spiffe/blob/main/LICENSE)
- github.com/srebhan/cborquery [MIT License](https://github.com/srebhan/cborquery/blob/main/LICENSE)
- github.com/srebhan/protobufquery [MIT License](https://github.com/srebhan/protobufquery/blob/master/LICENSE)
- github.com/stoewer/go-strcase [MIT License](https://github.com/stoewer/go-strcase/blob/master/LICENSE)
- github.com/stretchr/objx [MIT License](https://github.com/stretchr/objx/blob/master/LICENSE)
- github.com/stretchr/testify [MIT License](https://github.com/stretchr/testify/blob/master/LICENSE)
- github.com/tdrn-org/go-hue [MIT License](https://github.com/tdrn-org/go-hue/blob/main/LICENSE)
- github.com/tdrn-org/go-nsdp [MIT License](https://github.com/tdrn-org/go-nsdp/blob/main/LICENSE)
- github.com/testcontainers/testcontainers-go [MIT License](https://github.com/testcontainers/testcontainers-go/blob/main/LICENSE)
- github.com/thomasklein94/packer-plugin-libvirt [Mozilla Public License 2.0](https://github.com/thomasklein94/packer-plugin-libvirt/blob/main/LICENSE)
- github.com/tidwall/gjson [MIT License](https://github.com/tidwall/gjson/blob/master/LICENSE)
- github.com/tidwall/match [MIT License](https://github.com/tidwall/match/blob/master/LICENSE)
- github.com/tidwall/pretty [MIT License](https://github.com/tidwall/pretty/blob/master/LICENSE)
- github.com/tidwall/tinylru [MIT License](https://github.com/tidwall/tinylru/blob/master/LICENSE)
- github.com/tidwall/wal [MIT License](https://github.com/tidwall/wal/blob/master/LICENSE)
- github.com/tinylib/msgp [MIT License](https://github.com/tinylib/msgp/blob/master/LICENSE)
- github.com/tklauser/go-sysconf [BSD 3-Clause "New" or "Revised" License](https://github.com/tklauser/go-sysconf/blob/master/LICENSE)
- github.com/tklauser/numcpus [Apache License 2.0](https://github.com/tklauser/numcpus/blob/master/LICENSE)
- github.com/twmb/murmur3 [BSD 3-Clause "New" or "Revised" License](https://github.com/twmb/murmur3/blob/master/LICENSE)
- github.com/uber/jaeger-client-go [Apache License 2.0](https://github.com/jaegertracing/jaeger-client-go/blob/master/LICENSE)
- github.com/uber/jaeger-lib [Apache License 2.0](https://github.com/jaegertracing/jaeger-lib/blob/main/LICENSE)
- github.com/urfave/cli [MIT License](https://github.com/urfave/cli/blob/main/LICENSE)
- github.com/vapourismo/knx-go [MIT License](https://github.com/vapourismo/knx-go/blob/master/LICENSE)
- github.com/vishvananda/netlink [Apache License 2.0](https://github.com/vishvananda/netlink/blob/master/LICENSE)
- github.com/vishvananda/netns [Apache License 2.0](https://github.com/vishvananda/netns/blob/master/LICENSE)
- github.com/vjeantet/grok [Apache License 2.0](https://github.com/vjeantet/grok/blob/master/LICENSE)
- github.com/vmware/govmomi [Apache License 2.0](https://github.com/vmware/govmomi/blob/master/LICENSE.txt)
- github.com/wavefronthq/wavefront-sdk-go [Apache License 2.0](https://github.com/wavefrontHQ/wavefront-sdk-go/blob/master/LICENSE)
- github.com/x448/float16 [MIT License](https://github.com/x448/float16/blob/master/LICENSE)
- github.com/xanzy/ssh-agent [Apache License 2.0](https://github.com/xanzy/ssh-agent/blob/main/LICENSE)
- github.com/xdg-go/pbkdf2 [Apache License 2.0](https://github.com/xdg-go/pbkdf2/blob/main/LICENSE)
- github.com/xdg-go/scram [Apache License 2.0](https://github.com/xdg-go/scram/blob/master/LICENSE)
- github.com/xdg-go/stringprep [Apache License 2.0](https://github.com/xdg-go/stringprep/blob/master/LICENSE)
- github.com/xdg/scram [Apache License 2.0](https://github.com/xdg-go/scram/blob/master/LICENSE)
- github.com/xdg/stringprep [Apache License 2.0](https://github.com/xdg-go/stringprep/blob/master/LICENSE)
- github.com/xrash/smetrics [MIT License](https://github.com/xrash/smetrics/blob/master/LICENSE)
- github.com/youmark/pkcs8 [MIT License](https://github.com/youmark/pkcs8/blob/master/LICENSE)
- github.com/yuin/gopher-lua [MIT License](https://github.com/yuin/gopher-lua/blob/master/LICENSE)
- github.com/yusufpapurcu/wmi [MIT License](https://github.com/yusufpapurcu/wmi/blob/master/LICENSE)
- github.com/zeebo/errs [MIT License](https://github.com/zeebo/errs/blob/master/LICENSE)
- github.com/zeebo/xxh3 [BSD 2-Clause "Simplified" License](https://github.com/zeebo/xxh3/blob/master/LICENSE)
- go.mongodb.org/mongo-driver [Apache License 2.0](https://github.com/mongodb/mongo-go-driver/blob/master/LICENSE)
- go.opencensus.io [Apache License 2.0](https://github.com/census-instrumentation/opencensus-go/blob/master/LICENSE)
- go.opentelemetry.io/auto/sdk [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go-instrumentation/blob/main/sdk/LICENSE)
- go.opentelemetry.io/collector/consumer [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-collector/blob/main/LICENSE)
- go.opentelemetry.io/collector/pdata [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-collector/blob/main/LICENSE)
- go.opentelemetry.io/collector/semconv [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-collector/blob/main/LICENSE)
- go.opentelemetry.io/contrib/detectors/gcp [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/LICENSE)
- go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/LICENSE)
- go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/LICENSE)
- go.opentelemetry.io/otel [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go/blob/main/LICENSE)
- go.opentelemetry.io/otel/metric [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go/blob/main/LICENSE)
- go.opentelemetry.io/otel/sdk [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go/blob/main/LICENSE)
- go.opentelemetry.io/otel/sdk/metric [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go/blob/main/LICENSE)
- go.opentelemetry.io/otel/trace [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-go/blob/main/LICENSE)
- go.opentelemetry.io/proto/otlp [Apache License 2.0](https://github.com/open-telemetry/opentelemetry-proto-go/blob/main/LICENSE)
- go.starlark.net [BSD 3-Clause "New" or "Revised" License](https://github.com/google/starlark-go/blob/master/LICENSE)
- go.step.sm/crypto [Apache License 2.0](https://github.com/smallstep/crypto/blob/master/LICENSE)
- go.uber.org/atomic [MIT License](https://pkg.go.dev/go.uber.org/atomic?tab=licenses)
- go.uber.org/multierr [MIT License](https://pkg.go.dev/go.uber.org/multierr?tab=licenses)
- go.uber.org/zap [MIT License](https://pkg.go.dev/go.uber.org/zap?tab=licenses)
- golang.org/x/crypto [BSD 3-Clause Clear License](https://github.com/golang/crypto/blob/master/LICENSE)
- golang.org/x/exp [BSD 3-Clause Clear License](https://github.com/golang/exp/blob/master/LICENSE)
- golang.org/x/net [BSD 3-Clause Clear License](https://github.com/golang/net/blob/master/LICENSE)
- golang.org/x/oauth2 [BSD 3-Clause "New" or "Revised" License](https://github.com/golang/oauth2/blob/master/LICENSE)
- golang.org/x/sync [BSD 3-Clause "New" or "Revised" License](https://github.com/golang/sync/blob/master/LICENSE)
- golang.org/x/sys [BSD 3-Clause Clear License](https://github.com/golang/sys/blob/master/LICENSE)
- golang.org/x/term [BSD 3-Clause License](https://pkg.go.dev/golang.org/x/term?tab=licenses)
- golang.org/x/text [BSD 3-Clause Clear License](https://github.com/golang/text/blob/master/LICENSE)
- golang.org/x/time [BSD 3-Clause Clear License](https://github.com/golang/time/blob/master/LICENSE)
- golang.org/x/xerrors [BSD 3-Clause Clear License](https://github.com/golang/xerrors/blob/master/LICENSE)
- golang.zx2c4.com/wireguard [MIT License](https://github.com/WireGuard/wgctrl-go/blob/master/LICENSE.md)
- golang.zx2c4.com/wireguard/wgctrl [MIT License](https://github.com/WireGuard/wgctrl-go/blob/master/LICENSE.md)
- gonum.org/v1/gonum [BSD 3-Clause "New" or "Revised" License](https://github.com/gonum/gonum/blob/master/LICENSE)
- google.golang.org/api [BSD 3-Clause "New" or "Revised" License](https://github.com/googleapis/google-api-go-client/blob/master/LICENSE)
- google.golang.org/genproto [Apache License 2.0](https://github.com/google/go-genproto/blob/master/LICENSE)
- google.golang.org/genproto/googleapis/api [Apache License 2.0](https://pkg.go.dev/google.golang.org/genproto/googleapis/api?tab=licenses)
- google.golang.org/genproto/googleapis/rpc [Apache License 2.0](https://pkg.go.dev/google.golang.org/genproto/googleapis/rpc?tab=licenses)
- google.golang.org/grpc [Apache License 2.0](https://github.com/grpc/grpc-go/blob/master/LICENSE)
- google.golang.org/protobuf [BSD 3-Clause "New" or "Revised" License](https://pkg.go.dev/google.golang.org/protobuf?tab=licenses)
- gopkg.in/evanphx/json-patch.v4 [BSD 3-Clause "New" or "Revised" License](https://github.com/evanphx/json-patch/blob/master/LICENSE)
- gopkg.in/fatih/pool.v2 [MIT License](https://github.com/fatih/pool/blob/v2.0.0/LICENSE)
- gopkg.in/fsnotify.v1 [BSD 3-Clause "New" or "Revised" License](https://github.com/fsnotify/fsnotify/blob/v1.4.7/LICENSE)
- gopkg.in/gorethink/gorethink.v3 [Apache License 2.0](https://github.com/rethinkdb/rethinkdb-go/blob/v3.0.5/LICENSE)
- gopkg.in/inf.v0 [BSD 3-Clause "New" or "Revised" License](https://github.com/go-inf/inf/blob/v0.9.1/LICENSE)
- gopkg.in/ini.v1 [Apache License 2.0](https://github.com/go-ini/ini/blob/master/LICENSE)
- gopkg.in/olivere/elastic.v5 [MIT License](https://github.com/olivere/elastic/blob/v5.0.76/LICENSE)
- gopkg.in/tomb.v1 [BSD 3-Clause Clear License](https://github.com/go-tomb/tomb/blob/v1/LICENSE)
- gopkg.in/tomb.v2 [BSD 3-Clause Clear License](https://github.com/go-tomb/tomb/blob/v2/LICENSE)
- gopkg.in/yaml.v2 [Apache License 2.0](https://github.com/go-yaml/yaml/blob/v2.2.2/LICENSE)
- gopkg.in/yaml.v3 [MIT License](https://github.com/go-yaml/yaml/blob/v3/LICENSE)
- k8s.io/api [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- k8s.io/apimachinery [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- k8s.io/client-go [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- k8s.io/klog [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- k8s.io/kube-openapi [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- k8s.io/utils [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- layeh.com/radius [Mozilla Public License 2.0](https://github.com/layeh/radius/blob/master/LICENSE)
- modernc.org/libc [BSD 3-Clause "New" or "Revised" License](https://gitlab.com/cznic/libc/-/blob/master/LICENSE)
- modernc.org/mathutil [BSD 3-Clause "New" or "Revised" License](https://gitlab.com/cznic/mathutil/-/blob/master/LICENSE)
- modernc.org/memory [BSD 3-Clause "New" or "Revised" License](https://gitlab.com/cznic/memory/-/blob/master/LICENSE)
- modernc.org/sqlite [BSD 3-Clause "New" or "Revised" License](https://gitlab.com/cznic/sqlite/-/blob/master/LICENSE)
- sigs.k8s.io/json [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- sigs.k8s.io/randfill [Apache License 2.0](https://github.com/kubernetes-sigs/randfill/blob/main/LICENSE)
- sigs.k8s.io/structured-merge-diff [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- sigs.k8s.io/yaml [Apache License 2.0](https://github.com/kubernetes/client-go/blob/master/LICENSE)
- software.sslmate.com/src/go-pkcs12 [BSD 3-Clause "New" or "Revised" License](https://github.com/SSLMate/go-pkcs12/blob/master/LICENSE)
## Telegraf used and modified code from these projects
- github.com/DataDog/datadog-agent [Apache License 2.0](https://github.com/DataDog/datadog-agent/blob/main/LICENSE)

50
docs/METRICS.md Normal file
View file

@ -0,0 +1,50 @@
# Metrics
Telegraf metrics are the internal representation used to model data during
processing. Metrics are closely based on InfluxDB's data model and contain
four main components:
- **Measurement Name**: Description and namespace for the metric.
- **Tags**: Key/Value string pairs, usually used to identify the metric.
- **Fields**: Key/Value pairs that are typed and usually contain the
metric data.
- **Timestamp**: Date and time associated with the fields.
This metric type exists only in memory and must be converted to a concrete
representation in order to be transmitted or viewed. To achieve this, we
provide several [output data formats][], sometimes referred to as
*serializers*. Our default serializer converts to [InfluxDB Line
Protocol][line protocol], which provides a high-performance, one-to-one
mapping from Telegraf metrics.
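For example, a metric with all four components rendered in line protocol
looks like this (values are illustrative):

```text
weather,location=us-midwest temperature=82.0,humidity=71i 1709572232000000000
```

Here `weather` is the measurement name, `location` is a tag, `temperature` and
`humidity` are fields, and the trailing value is the nanosecond timestamp.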
[output data formats]: /docs/DATA_FORMATS_OUTPUT.md
[line protocol]: /plugins/serializers/influx
## Tracking Metrics
Tracking metrics ensure that data read from an input is handed to an output
before the message is acknowledged back to the input. The use case for these
metrics is to guarantee that the message reaches its destination before the
metric is removed from the input.
For example, if a configuration is reading from MQTT, Kafka, or an AMQP source,
Telegraf will read the message and wait until the metric is handed to the
output before telling the source that the message was read. If Telegraf stops
or the system running Telegraf crashes, this allows messages that were not
completely delivered to an output to be re-read later.
Please note that this process applies only to internal plugins. For external
plugins, the metrics are acknowledged regardless of the actual output.
### Undelivered Messages
When an input uses tracking metrics, an additional setting,
`max_undelivered_messages`, is available in that plugin. This setting
determines how many messages may be read but not yet delivered to an output
before the plugin pauses reading additional messages. In practice, this means
that Telegraf may not read new messages from an input at every collection
interval.
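For example, for a broker-backed input such as MQTT the setting sits alongside
the plugin's other options; a minimal sketch with illustrative values:

```toml
[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  topics = ["sensors/#"]
  ## Maximum number of messages read but not yet delivered to an output
  max_undelivered_messages = 1000
```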
Users should use caution with this setting. Setting the value too high may
mean that Telegraf pushes constant batches to an output, ignoring the flush
interval.

32
docs/NIGHTLIES.md Normal file
View file

@ -0,0 +1,32 @@
# Nightly Builds
These builds are generated from the master branch at midnight UTC:
| DEB | RPM | TAR GZ | ZIP |
| --------------- | --------------- | ------------------------------| --- |
| [amd64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_amd64.deb) | [aarch64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.aarch64.rpm) | [darwin_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_darwin_amd64.tar.gz) | [windows_amd64.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_amd64.zip) |
| [arm64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_arm64.deb) | [armel.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.armel.rpm) | [darwin_arm64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_darwin_arm64.tar.gz) | [windows_arm64.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_arm64.zip) |
| [armel.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_armel.deb) | [armv6hl.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.armv6hl.rpm) | [freebsd_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_amd64.tar.gz) | [windows_i386.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_i386.zip) |
| [armhf.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_armhf.deb) | [i386.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.i386.rpm) | [freebsd_armv7.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_armv7.tar.gz) | |
| [i386.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_i386.deb) | [loong64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.loong64.rpm) | [freebsd_i386.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_i386.tar.gz) | |
| [loong64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_loong64.deb) | [ppc64le.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.ppc64le.rpm) | [linux_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_amd64.tar.gz) | |
| [mips.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_mips.deb) | [riscv64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.riscv64.rpm) | [linux_arm64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_arm64.tar.gz) | |
| [mipsel.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_mipsel.deb) | [s390x.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.s390x.rpm) | [linux_armel.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_armel.tar.gz) | |
| [ppc64el.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_ppc64el.deb) | [x86_64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.x86_64.rpm) | [linux_armhf.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_armhf.tar.gz) | |
| [riscv64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_riscv64.deb) | | [linux_i386.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_i386.tar.gz) | |
| [s390x.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_s390x.deb) | | [linux_loong64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_loong64.tar.gz) | |
| | | [linux_mips.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_mips.tar.gz) | |
| | | [linux_mipsel.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_mipsel.tar.gz) | |
| | | [linux_ppc64le.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_ppc64le.tar.gz) | |
| | | [linux_riscv64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_riscv64.tar.gz) | |
| | | [linux_s390x.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_s390x.tar.gz) | |
Nightly docker images are available on [quay.io](https://quay.io/repository/influxdb/telegraf-nightly?tab=tags):
```shell
# Debian-based image
docker pull quay.io/influxdb/telegraf-nightly:latest
# Alpine-based image
docker pull quay.io/influxdb/telegraf-nightly:alpine
```

150
docs/OUTPUTS.md Normal file
View file

@ -0,0 +1,150 @@
# Output Plugins
This section is for developers who want to create a new output sink. Outputs
are created in a similar manner to collection plugins, and their interface has
similar constructs.
## Output Plugin Guidelines
- An output must conform to the [telegraf.Output][] interface.
- Outputs should call `outputs.Add` in their `init` function to register
themselves. See below for a quick example.
- To be available within Telegraf itself, plugins must register themselves
using a file in `github.com/influxdata/telegraf/plugins/outputs/all` named
according to the plugin name. Make sure you also add build-tags to
conditionally build the plugin.
- Each plugin requires a file called `sample.conf` containing the sample
configuration for the plugin in TOML format.
Please consult the [Sample Config][] page for the latest style guidelines.
- Each plugin `README.md` file should include the `sample.conf` file in a
section describing the configuration by specifying a `toml` section in the
form `toml @sample.conf`. The specified file(s) are then injected
automatically into the Readme.
- Follow the recommended [Code Style][].
[Sample Config]: /docs/developers/SAMPLE_CONFIG.md
[Code Style]: /docs/developers/CODE_STYLE.md
[telegraf.Output]: https://godoc.org/github.com/influxdata/telegraf#Output
## Data Formats
Some output plugins, such as the [file][] plugin, can write in any supported
[output data formats][].
In order to enable this, you must define a
`SetSerializer(serializer serializers.Serializer)`
method on the plugin object (see the file plugin for an example), as well as
define `serializer` as a field of the object.
You can then utilize the serializer internally in your plugin, serializing data
before it's written. Telegraf's configuration layer will take care of
instantiating and creating the `Serializer` object.
You should also add the following to your `SampleConfig()`:
```toml
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```
[file]: /plugins/inputs/file
[output data formats]: /docs/DATA_FORMATS_OUTPUT.md
## Flushing Metrics to Outputs
Metrics are flushed to outputs when any of the following events happen:
- `flush_interval + rand(flush_jitter)` has elapsed since start or the last
flush interval
- At least `metric_batch_size` count of metrics are waiting in the buffer
- The telegraf process has received a SIGUSR1 signal
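The first two triggers correspond to agent-level settings; a minimal sketch
with illustrative values:

```toml
[agent]
  ## Flush when at least this many metrics are waiting in the buffer
  metric_batch_size = 1000
  ## Flush at least this often, plus up to flush_jitter of random delay
  flush_interval = "10s"
  flush_jitter = "5s"
```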
Note that if the flush takes longer than the `agent.interval` to write the
metrics to the output, the user will see a message saying the output:
> did not complete within its flush interval
This may mean the output is not keeping up with the flow of metrics, and you
may want to look into enabling compression, reducing the size of your metrics,
or investigating other reasons why the writes might be taking longer than
expected.
## Output Plugin Example
## Registration
Registration of the plugin on `plugins/outputs/all/simpleoutput.go`:
```go
//go:build !custom || outputs || outputs.simpleoutput
package all
import _ "github.com/influxdata/telegraf/plugins/outputs/simpleoutput" // register plugin
```
The _build-tags_ in the first line allow you to selectively include or exclude
your plugin when customizing Telegraf.
## Plugin
Content of your plugin file e.g. `simpleoutput.go`
```go
//go:generate ../../../tools/readme_config_includer/generator
package simpleoutput
// simpleoutput.go
import (
_ "embed"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/outputs"
)
//go:embed sample.conf
var sampleConfig string
type Simple struct {
Ok bool `toml:"ok"`
Log telegraf.Logger `toml:"-"`
}
func (*Simple) SampleConfig() string {
return sampleConfig
}
// Init is for setup, and validating config.
func (s *Simple) Init() error {
return nil
}
func (s *Simple) Connect() error {
// Make any connection required here
return nil
}
func (s *Simple) Close() error {
// Close any connections here.
// Write will not be called once Close is called, so there is no need to synchronize.
return nil
}
// Write should write immediately to the output, and not buffer writes
// (Telegraf manages the buffer for you). Returning an error will fail this
// batch of writes and the entire batch will be retried automatically.
func (s *Simple) Write(metrics []telegraf.Metric) error {
for _, metric := range metrics {
// write `metric` to the output sink here
}
return nil
}
func init() {
outputs.Add("simpleoutput", func() telegraf.Output { return &Simple{} })
}
```

243
docs/PARSING_DATA.md Normal file
View file

@ -0,0 +1,243 @@
# Parsing Data
Telegraf can ingest data in a variety of formats, but requires configuration
from the user in order to correctly parse, store, and send the original data.
Telegraf does not keep the raw data internally.
Telegraf uses an internal metric representation consisting of the metric name,
tags, fields and a timestamp, very similar to [line protocol][]. This
means that data needs to be broken up into a metric name, tags, fields, and a
timestamp. While none of these options are required, they are available to
the user and might be necessary to ensure the data is represented correctly.
[line protocol]: https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/
## Parsers
The first step is to determine which parser to use. Look at the list of
[parsers][] and find one that will work with the user's data. This is generally
straightforward, as a given data type usually has only one parser that is
actually applicable.
[parsers]: https://github.com/influxdata/telegraf/tree/master/plugins/parsers
### JSON parsers
There is an exception when it comes to JSON data. Instead of a single parser,
there are three different parsers capable of reading JSON data:
* `json`: This parser is great for flat JSON data. If the JSON is more complex
  and, for example, has nested objects or arrays, then do not use this parser;
  look at the other two options.
* `json_v2`: The v2 parser was created out of the need to parse JSON objects. It
can take on more advanced cases, at the cost of additional configuration.
* `xpath_json`: The xpath parser is the most capable of the three options. While
the xpath name may imply XML data, it can parse a variety of data types using
XPath expressions.
## Tags and fields
The next step is to look at the data and determine how it should be split up
between tags and fields. Tags are generally strings or values that a user will
want to search on, while fields are the raw data values, numeric types, etc.
Generally, data is considered to be a field unless otherwise specified as a
tag.
## Timestamp
To parse a timestamp, at the very least the user needs to specify which field
holds the timestamp and what the format of the timestamp is. The format can be
a predefined Unix format, a named format, or a custom format based on Go
reference time.
For Unix timestamps Telegraf understands the following settings:
| Timestamp | Timestamp Format |
|-----------------------|------------------|
| `1709572232` | `unix` |
| `1709572232123` | `unix_ms` |
| `1709572232123456` | `unix_us` |
| `1709572232123456789` | `unix_ns` |
There are some named formats available as well:
| Timestamp | Named Format |
|---------------------------------------|---------------|
| `Mon Jan _2 15:04:05 2006` | `ANSIC` |
| `Mon Jan _2 15:04:05 MST 2006` | `UnixDate` |
| `Mon Jan 02 15:04:05 -0700 2006` | `RubyDate` |
| `02 Jan 06 15:04 MST` | `RFC822` |
| `02 Jan 06 15:04 -0700` | `RFC822Z` |
| `Monday, 02-Jan-06 15:04:05 MST` | `RFC850` |
| `Mon, 02 Jan 2006 15:04:05 MST` | `RFC1123` |
| `Mon, 02 Jan 2006 15:04:05 -0700` | `RFC1123Z` |
| `2006-01-02T15:04:05Z07:00` | `RFC3339` |
| `2006-01-02T15:04:05.999999999Z07:00` | `RFC3339Nano` |
| `Jan _2 15:04:05` | `Stamp` |
| `Jan _2 15:04:05.000` | `StampMilli` |
| `Jan _2 15:04:05.000000` | `StampMicro` |
| `Jan _2 15:04:05.000000000` | `StampNano` |
If the timestamp does not conform to any of the above, then the user can specify
a custom timestamp format, in which the user must provide the timestamp in
[Go reference time][] notation. Here are a few example timestamps and their Go
reference time equivalent:
| Timestamp | Go reference time |
|-------------------------------|-------------------------------|
| `2024-03-04T17:10:32` | `2006-01-02T15:04:05` |
| `04 Mar 24 10:10 -0700` | `02 Jan 06 15:04 -0700` |
| `2024-03-04T10:10:32Z07:00` | `2006-01-02T15:04:05Z07:00` |
| `2024-03-04 17:10:32.123+00` | `2006-01-02 15:04:05.999+00` |
| `2024-03-04T10:10:32.123456Z` | `2006-01-02T15:04:05.000000Z` |
| `2024-03-04T10:10:32.123456Z` | `2006-01-02T15:04:05.999999999Z` |
Note that for fractional second values, the user can use either `9`s or `0`s.
Using `0`s forces a fixed number of digits, while `9`s allow trailing digits to
be omitted.
Please note that timezone abbreviations are ambiguous! For example, `MST` can
stand for either Mountain Standard Time (UTC-07) or Malaysia Standard Time
(UTC+08). As such, avoid abbreviated timezones if possible.
Unix timestamps use UTC; there is no concept of a timezone for a Unix timestamp.
[Go reference time]: https://pkg.go.dev/time#pkg-constants
## Examples
Below are a few basic examples to get users started.
### CSV
Given the following data:
```csv
node,temp,humidity,alarm,time
node1,32.3,23,false,2023-03-06T16:52:23Z
node2,22.6,44,false,2023-03-06T16:52:23Z
node3,17.9,56,true,2023-03-06T16:52:23Z
```
Here is the corresponding parser configuration and result:
```toml
[[inputs.file]]
files = ["test.csv"]
data_format = "csv"
csv_header_row_count = 1
csv_column_names = ["node","temp","humidity","alarm","time"]
csv_tag_columns = ["node"]
csv_timestamp_column = "time"
csv_timestamp_format = "2006-01-02T15:04:05Z"
```
```text
file,node=node1 temp=32.3,humidity=23i,alarm=false 1678121543000000000
file,node=node2 temp=22.6,humidity=44i,alarm=false 1678121543000000000
file,node=node3 temp=17.9,humidity=56i,alarm=true 1678121543000000000
```
### JSON flat data
Given the following data:
```json
{ "node": "node", "temp": 32.3, "humidity": 23, "alarm": false, "time": "1709572232123456789"}
```
Here is the corresponding parser configuration:
```toml
[[inputs.file]]
files = ["test.json"]
precision = "1ns"
data_format = "json"
tag_keys = ["node"]
json_time_key = "time"
json_time_format = "unix_ns"
```
```text
file,node=node temp=32.3,humidity=23 1709572232123456789
```
### JSON Objects
Given the following data:
```json
{
"metrics": [
{ "node": "node1", "temp": 32.3, "humidity": 23, "alarm": "false", "time": "1678121543"},
{ "node": "node2", "temp": 22.6, "humidity": 44, "alarm": "false", "time": "1678121543"},
{ "node": "node3", "temp": 17.9, "humidity": 56, "alarm": "true", "time": "1678121543"}
]
}
```
Here is the corresponding parser configuration:
```toml
[[inputs.file]]
files = ["test.json"]
data_format = "json_v2"
[[inputs.file.json_v2]]
[[inputs.file.json_v2.object]]
path = "metrics"
timestamp_key = "time"
timestamp_format = "unix"
[[inputs.file.json_v2.object.tag]]
path = "#.node"
[[inputs.file.json_v2.object.field]]
path = "#.temp"
type = "float"
[[inputs.file.json_v2.object.field]]
path = "#.humidity"
type = "int"
[[inputs.file.json_v2.object.field]]
path = "#.alarm"
type = "bool"
```
```text
file,node=node1 temp=32.3,humidity=23i,alarm=false 1678121543000000000
file,node=node2 temp=22.6,humidity=44i,alarm=false 1678121543000000000
file,node=node3 temp=17.9,humidity=56i,alarm=true 1678121543000000000
```
### JSON Line Protocol
Given the following data:
```json
{
"fields": {"temp": 32.3, "humidity": 23, "alarm": false},
"name": "measurement",
"tags": {"node": "node1"},
"time": "2024-03-04T10:10:32.123456Z"
}
```
Here is the corresponding parser configuration:
```toml
[[inputs.file]]
files = ["test.json"]
precision = "1us"
data_format = "xpath_json"
[[inputs.file.xpath]]
metric_name = "/name"
field_selection = "fields/*"
tag_selection = "tags/*"
timestamp = "/time"
timestamp_format = "2006-01-02T15:04:05.999999999Z"
```
```text
measurement,node=node1 alarm="false",humidity="23",temp="32.3" 1709547032123456000
```

180
docs/PROCESSORS.md Normal file
View file

@ -0,0 +1,180 @@
# Processor Plugins
This section is for developers who want to create a new processor plugin.
## Processor Plugin Guidelines
* A processor must conform to the [telegraf.Processor][] interface.
* Processors should call `processors.Add` in their `init` function to register
themselves. See below for a quick example.
* To be available within Telegraf itself, plugins must register themselves
using a file in `github.com/influxdata/telegraf/plugins/processors/all`
named according to the plugin name. Make sure you also add build-tags to
conditionally build the plugin.
* Each plugin requires a file called `sample.conf` containing the sample
configuration for the plugin in TOML format.
Please consult the [Sample Config][] page for the latest style guidelines.
* Each plugin `README.md` file should include the `sample.conf` file in a
section describing the configuration by specifying a `toml` section in the
form `toml @sample.conf`. The specified file(s) are then injected
automatically into the Readme.
* Follow the recommended [Code Style][].
[Sample Config]: /docs/developers/SAMPLE_CONFIG.md
[Code Style]: /docs/developers/CODE_STYLE.md
[telegraf.Processor]: https://godoc.org/github.com/influxdata/telegraf#Processor
## Streaming Processors
Streaming processors are a new processor type available to you. They are
particularly useful to implement processor types that use background processes
or goroutines to process multiple metrics at the same time. Some examples of
this are the execd processor, which pipes metrics out to an external process
over stdin and reads them back over stdout, and the reverse_dns processor, which
does reverse DNS lookups on IP addresses in fields. While both of these come
with a speed cost, it would be significantly worse if you had to process one
metric completely from start to finish before handling the next metric, and thus
they benefit significantly from a streaming-pipe approach.
Some differences from classic Processors:
* Streaming processors must conform to the [telegraf.StreamingProcessor][] interface.
* Processors should call `processors.AddStreaming` in their `init` function to register
themselves. See below for a quick example.
[telegraf.StreamingProcessor]: https://godoc.org/github.com/influxdata/telegraf#StreamingProcessor
## Processor Plugin Example
### Registration
Registration of the plugin on `plugins/processors/all/printer.go`:
```go
//go:build !custom || processors || processors.printer
package all
import _ "github.com/influxdata/telegraf/plugins/processors/printer" // register plugin
```
The _build-tags_ in the first line allow you to selectively include or exclude
your plugin when customizing Telegraf.
### Plugin
Content of your plugin file e.g. `printer.go`
```go
//go:generate ../../../tools/readme_config_includer/generator
package printer
import (
_ "embed"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/processors"
)
//go:embed sample.conf
var sampleConfig string
type Printer struct {
Log telegraf.Logger `toml:"-"`
}
func (*Printer) SampleConfig() string {
return sampleConfig
}
// Init is for setup, and validating config.
func (p *Printer) Init() error {
return nil
}
func (p *Printer) Apply(in ...telegraf.Metric) []telegraf.Metric {
for _, metric := range in {
fmt.Println(metric.String())
}
return in
}
func init() {
processors.Add("printer", func() telegraf.Processor {
return &Printer{}
})
}
```
## Streaming Processor Example
```go
//go:generate ../../../tools/readme_config_includer/generator
package printer
import (
_ "embed"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/processors"
)
//go:embed sample.conf
var sampleConfig string
type Printer struct {
Log telegraf.Logger `toml:"-"`
}
func (*Printer) SampleConfig() string {
return sampleConfig
}
// Init is for setup, and validating config.
func (p *Printer) Init() error {
return nil
}
// Start is called once when the plugin starts; it is only called once per
// plugin instance, and never in parallel.
// Start should return once it is ready to receive metrics.
// The passed in accumulator is the same as the one passed to Add(), so you
// can choose to save it in the plugin, or use the one received from Add().
func (p *Printer) Start(acc telegraf.Accumulator) error {
	return nil
}
// Add is called for each metric to be processed. The Add() function does not
// need to wait for the metric to be processed before returning, and it may
// be acceptable to let background goroutine(s) handle the processing if you
// have slow processing you need to do in parallel.
// Keep in mind Add() should not spawn unbounded goroutines, so you may need
// to use a semaphore or pool of workers (eg: reverse_dns plugin does this).
// Metrics you don't want to pass downstream should have metric.Drop() called,
// rather than simply omitting the acc.AddMetric() call
func (p *Printer) Add(metric telegraf.Metric, acc telegraf.Accumulator) error {
// print!
fmt.Println(metric.String())
// pass the metric downstream, or metric.Drop() it.
// Metric will be dropped if this function returns an error.
acc.AddMetric(metric)
return nil
}
// Stop gives you an opportunity to gracefully shut down the processor.
// Once Stop() is called, Add() will not be called any more. If you are using
// goroutines, you should wait for any in-progress metrics to be processed
// before returning from Stop().
// When stop returns, you should no longer be writing metrics to the
// accumulator.
func (p *Printer) Stop() error {
	return nil
}
func init() {
processors.AddStreaming("printer", func() telegraf.StreamingProcessor {
return &Printer{}
})
}
```

57
docs/PROFILING.md Normal file
View file

@ -0,0 +1,57 @@
# Profiling
Telegraf uses the standard package `net/http/pprof`. Via its HTTP server, this
package serves runtime profiling data in the format expected by the pprof
visualization tool.
## Enable profiling
By default, profiling is turned off. To enable profiling, users need to
specify the pprof address config parameter `pprof-addr`. For example:
```shell
telegraf --config telegraf.conf --pprof-addr localhost:6060
```
## Profiles
To view all available profiles, open the specified address in a browser, for
example `http://localhost:6060/debug/pprof/`.
To look at the heap profile:
```shell
go tool pprof http://localhost:6060/debug/pprof/heap
```
To look at a 30-second CPU profile:
```shell
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
```
## Generate heap image
It is very helpful to generate an image to visualize how heap memory is used.
It is best to capture an image a few moments after Telegraf starts and then
again at later intervals (e.g. 1min, 5min, etc.).
A user can capture the image with Go via:
```shell
go tool pprof -png http://localhost:6060/debug/pprof/heap > heap.png
```
The resulting image can be uploaded to a bug report.
## References
For additional information on pprof see the following:
* [net/http/pprof][]
* [Julia Evans: Profiling Go programs with pprof][]
* [Debugging Go Code][]
[net/http/pprof]: https://pkg.go.dev/net/http/pprof
[julia evans: profiling go programs with pprof]: https://jvns.ca/blog/2017/09/24/profiling-go-with-pprof/
[Debugging Go Code]: https://www.infoq.com/articles/debugging-go-programs-pprof-trace/

68
docs/QUICK_START.md Normal file
View file

@ -0,0 +1,68 @@
# Quick Start
The following demonstrates how to quickly get started with Telegraf, using
Docker to monitor the local system.
## Install
This example will use Docker to launch a Telegraf container:
```shell
docker pull telegraf
```
Refer to the [Install Guide][] for the full list of ways to install Telegraf.
[Install Guide]: /docs/INSTALL_GUIDE.md
## Configure
Telegraf requires a configuration to start up. A configuration requires at least
one input to collect data from and one output to send data to. The configuration
file is a [TOML][] file.
[TOML]: /docs/TOML.md
```sh
$ cat config.toml
[[inputs.cpu]]
[[inputs.mem]]
[[outputs.file]]
```
The above enables two inputs, CPU and Memory, and one output, file. The inputs
will collect usage information about the CPU and memory, while the file output
is used to print the metrics to STDOUT.
Note that plugin definitions are TOML arrays of tables. This means users can
define a plugin multiple times, which is most useful with plugins that need to
connect to different endpoints.
## Launch
With the image downloaded and a config file created, launch the image:
```sh
docker run --rm --volume $PWD/config.toml:/etc/telegraf/telegraf.conf telegraf
```
The user will see some initial information about which config file was loaded,
the version, and which plugins were loaded. After the first few seconds,
metrics will start to print out.
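The exact tags, values, and timestamps will differ, but the printed metrics
will look roughly like this illustrative line protocol:

```text
cpu,cpu=cpu-total,host=5f2a0c7e21d4 usage_idle=98.7,usage_user=0.6,usage_system=0.7 1709572230000000000
mem,host=5f2a0c7e21d4 used_percent=34.2,available=8345678848i,total=16777216000i 1709572230000000000
```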
## Next steps
To go beyond this quick start, users should consider the following:
1. Determine where you want to collect data or metrics from and look at the
available [input plugins][]
2. Determine where you want to send metrics to and look at the available
[output plugins][]
3. Look at the [install guide][] for the complete list of methods to deploy and
install Telegraf
4. If parsing arbitrary data or sending metrics or logs to Telegraf, read
through the [parsing data][] guide.
[input plugins]: /plugins/inputs
[output plugins]: /plugins/outputs
[parsing data]: /docs/PARSING_DATA.md

89
docs/README.md Normal file
View file

@ -0,0 +1,89 @@
# Telegraf Documentation
* [FAQ][]
* [Install Guide][]
* [Quick Start][]
## Usage
* [Commands and Flags][]
* [Configuration][]
* [Docker][]
* [Windows Service][]
* [Releases][]
* [Supported Platforms][]
## Plugins
* [Aggregators][]
* [External Plugins][]
* [Inputs][]
* [SQL Drivers Input][]
* [Parsers: Input Data Formats][]
* [Outputs][]
* [Secret Stores][]
* [Serializers: Output Data Formats][]
* [Processors][]
## Developers
* [Custom Builds][]
* [Integration Tests][]
* [License of Dependencies][]
* [Nightlies][]
* [Profiling][]
## Reference
* [Aggregators & Processors][]
* [AppArmor][]
* [Metrics][]
* [Parsing Data][]
* [Template Pattern][]
* [TOML][]
* [TLS][]
## Blog Posts
* [Common Expression Language][]
* [Config Recommendations and Performance Monitoring][]
* [Deploying Telegraf via Docker Compose][]
* [Reduce Binary Size][]
* [Storing Secrets][]
[Aggregators & Processors]: /docs/AGGREGATORS_AND_PROCESSORS.md
[Aggregators]: /docs/AGGREGATORS.md
[AppArmor]: /docs/APPARMOR.md
[Commands and Flags]: /docs/COMMANDS_AND_FLAGS.md
[Configuration]: /docs/CONFIGURATION.md
[Custom Builds]: /docs/CUSTOMIZATION.md
[Parsers: Input Data Formats]: /docs/DATA_FORMATS_INPUT.md
[Serializers: Output Data Formats]: /docs/DATA_FORMATS_OUTPUT.md
[Docker]: /docs/DOCKER.md
[External Plugins]: /docs/EXTERNAL_PLUGINS.md
[FAQ]: /docs/FAQ.md
[Inputs]: /docs/INPUTS.md
[Install Guide]: /docs/INSTALL_GUIDE.md
[Integration Tests]: /docs/INTEGRATION_TESTS.md
[License of Dependencies]: /docs/LICENSE_OF_DEPENDENCIES.md
[Metrics]: /docs/METRICS.md
[Nightlies]: /docs/NIGHTLIES.md
[Outputs]: /docs/OUTPUTS.md
[Parsing Data]: /docs/PARSING_DATA.md
[Processors]: /docs/PROCESSORS.md
[Profiling]: /docs/PROFILING.md
[Quick Start]: /docs/QUICK_START.md
[Releases]: /docs/RELEASES.md
[Secret Stores]: /docs/SECRETSTORES.md
[SQL Drivers Input]: /docs/SQL_DRIVERS_INPUT.md
[Supported Platforms]: /docs/SUPPORTED_PLATFORMS.md
[Template Pattern]: /docs/TEMPLATE_PATTERN.md
[TLS]: /docs/TLS.md
[TOML]: /docs/TOML.md
[Windows Service]: /docs/WINDOWS_SERVICE.md
[Config Recommendations and Performance Monitoring]: https://www.influxdata.com/blog/telegraf-best-practices/
[Deploying Telegraf via Docker Compose]: https://www.influxdata.com/blog/telegraf-deployment-strategies-docker-compose/
[Common Expression Language]: https://www.influxdata.com/blog/using-common-expression-language-metric-filtering-telegraf/
[Storing Secrets]: https://www.influxdata.com/blog/storing-secrets-telegraf/
[Reduce Binary Size]: https://www.influxdata.com/blog/how-reduce-telegraf-binary-size/

23
docs/RELEASES.md Normal file
View file

@ -0,0 +1,23 @@
# Releases
Telegraf has four minor releases a year in March, June, September, and
December. In between each of those minor releases, there are 2-4 bug fix
releases that happen every 3 weeks.
This [Google Calendar][] is kept up to date for upcoming release dates.
Additionally, users can look at the [GitHub milestones][] for the next minor
and bug fix releases.
## Versioning
Telegraf uses semantic versioning.
## Minor vs Patch Release
PRs that resolve issues are released in the next release. PRs that introduce
new features are held for the next minor release. Users can view what
[GitHub milestones][] a PR belongs to when they want to determine the release
it will go out with.
[Google Calendar]: https://calendar.google.com/calendar/embed?src=c_03d981cefd8d6432894cb162da5c6186e393bc0f970ca6c371201aa05d30d763%40group.calendar.google.com
[GitHub milestones]: https://github.com/influxdata/telegraf/milestones

115
docs/SECRETSTORES.md Normal file
View file

@ -0,0 +1,115 @@
# Secret Store Plugins
This section is for developers who want to create a new secret store plugin.
## Secret Store Plugin Guidelines
* A secret store must conform to the [telegraf.SecretStore][] interface.
* Secret-stores should call `secretstores.Add` in their `init` function to register
themselves. See below for a quick example.
* To be available within Telegraf itself, plugins must register themselves
using a file in `github.com/influxdata/telegraf/plugins/secretstores/all`
named according to the plugin name. Make sure you also add build-tags to
conditionally build the plugin.
* Each plugin requires a file called `sample.conf` containing the sample
configuration for the plugin in TOML format. Please consult the
[Sample Config][] page for the latest style guidelines.
* Each plugin `README.md` file should include the `sample.conf` file in a
section describing the configuration by specifying a `toml` section in the
form `toml @sample.conf`. The specified file(s) are then injected
automatically into the Readme.
* Follow the recommended [Code Style][].
[telegraf.SecretStore]: https://pkg.go.dev/github.com/influxdata/telegraf?utm_source=godoc#SecretStore
[Sample Config]: https://github.com/influxdata/telegraf/blob/master/docs/developers/SAMPLE_CONFIG.md
[Code Style]: https://github.com/influxdata/telegraf/blob/master/docs/developers/CODE_STYLE.md
## Secret Store Plugin Example
### Registration
Registration of the plugin on `plugins/secretstores/all/printer.go`:
```go
//go:build !custom || secretstores || secretstores.printer
package all
import _ "github.com/influxdata/telegraf/plugins/secretstores/printer" // register plugin
```
The _build-tags_ in the first line allow you to selectively include or exclude your
plugin when customizing Telegraf.
### Plugin
```go
//go:generate ../../../tools/readme_config_includer/generator
package main
import (
_ "embed"
"errors"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/secretstores"
)
//go:embed sample.conf
var sampleConfig string
type Printer struct {
Log telegraf.Logger `toml:"-"`
cache map[string]string
}
func (p *Printer) SampleConfig() string {
return sampleConfig
}
func (p *Printer) Init() error {
return nil
}
// Get searches for the given key and returns the secret
func (p *Printer) Get(key string) ([]byte, error) {
v, found := p.cache[key]
if !found {
return nil, errors.New("not found")
}
return []byte(v), nil
}
// Set sets the given secret for the given key
func (p *Printer) Set(key, value string) error {
p.cache[key] = value
return nil
}
// List lists all known secret keys
func (p *Printer) List() ([]string, error) {
keys := make([]string, 0, len(p.cache))
for k := range p.cache {
keys = append(keys, k)
}
return keys, nil
}
// GetResolver returns a function to resolve the given key.
func (p *Printer) GetResolver(key string) (telegraf.ResolveFunc, error) {
resolver := func() ([]byte, bool, error) {
s, err := p.Get(key)
return s, false, err
}
return resolver, nil
}
// Register the secret-store on load.
func init() {
secretstores.Add("printer", func(string) telegraf.SecretStore {
return &Printer{}
})
}
```
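Once registered and configured, secrets from a store can be referenced from any
other plugin's configuration using the `@{<store-id>:<key>}` syntax. A minimal
sketch (the `mystore` id and `influxdb_token` key are illustrative, not part of
the plugin above):

```toml
[[secretstores.printer]]
  id = "mystore"

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "@{mystore:influxdb_token}"
```

Telegraf resolves the reference through the secret store when the config is
loaded.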

55
docs/SQL_DRIVERS_INPUT.md Normal file
View file

@ -0,0 +1,55 @@
# Available SQL drivers for the SQL input plugin
This is a list of available drivers for the SQL input plugin. The data-source-name (DSN) is driver-specific and
might change between versions. Please check the driver documentation for available options and the format.
| database | driver | aliases | example DSN | comment |
| -------------------- | --------------------------------------------------------- | --------------- |------------------------------------------------------------------------------------------------------------------| --------------------------------------------------------------------------------------------------------------------- |
| ClickHouse           | [clickhouse](https://github.com/ClickHouse/clickhouse-go) |                 | `tcp://host:port[?param1=value&...&paramN=value]`                                                                  | see [clickhouse-go docs](https://github.com/ClickHouse/clickhouse-go#dsn) for more information                          |
| CockroachDB          | [cockroach](https://github.com/jackc/pgx)                 | postgres or pgx | see _postgres_ driver                                                                                              | uses the PostgreSQL driver                                                                                              |
| FlightSQL | [flightsql](https://github.com/apache/arrow/tree/main/go/arrow/flight/flightsql/driver) | | `flightsql://[username[:password]@]host:port?timeout=10s[&token=TOKEN][&param1=value1&...&paramN=valueN]` | see [driver docs](https://github.com/apache/arrow/blob/main/go/arrow/flight/flightsql/driver/README.md) for more information |
| IBM Netezza | [nzgo](https://github.com/IBM/nzgo) | | `host=your_nz_host port=5480 user=your_nz_user password=your_nz_password dbname=your_nz_db_name sslmode=disable` | see [driver docs](https://pkg.go.dev/github.com/IBM/nzgo/v12) for more |
| MariaDB | [maria](https://github.com/go-sql-driver/mysql) | mysql | see _mysql_ driver | uses MySQL driver |
| Microsoft SQL Server | [sqlserver](https://github.com/microsoft/go-mssqldb) | mssql | `sqlserver://username:password@host/instance?param1=value&param2=value` | uses newer _sqlserver_ driver |
| MySQL | [mysql](https://github.com/go-sql-driver/mysql) | | `[username[:password]@][protocol[(address)]]/dbname[?param1=value1&...&paramN=valueN]` | see [driver docs](https://github.com/go-sql-driver/mysql) for more information |
| Oracle | [oracle](https://github.com/sijms/go-ora) | oracle | `oracle://username:password@host:port/service?param1=value&param2=value` | see [driver docs](https://github.com/sijms/go-ora/blob/master/README.md) for more information |
| PostgreSQL | [postgres](https://github.com/jackc/pgx) | pgx | `postgresql://[user[:password]@][netloc][:port][,...][/dbname][?param1=value1&...]` | see [postgres docs](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING) for more information |
| SAP HANA | [go-hdb](https://github.com/SAP/go-hdb) | hana | `hdb://user:password@host:port` | see [driver docs](https://github.com/SAP/go-hdb) for more information |
| SQLite | [sqlite](https://gitlab.com/cznic/sqlite) | | `filename` | see [driver docs](https://pkg.go.dev/modernc.org/sqlite) for more information |
| TiDB | [tidb](https://github.com/go-sql-driver/mysql) | mysql | see _mysql_ driver | uses MySQL driver |
## Comments
### Driver aliases
Some database drivers are supported through another driver (e.g. CockroachDB). For other databases we provide a more
obvious name (e.g. postgres) compared to the driver name. For any of these drivers you can use the _alias_ name
during configuration.
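For example, a sketch of using an alias in the SQL input plugin configuration
(the DSN, query, and measurement name here are illustrative; check your
driver's documentation for the exact DSN format):

```toml
[[inputs.sql]]
  ## "maria" is an alias for the MySQL driver
  driver = "maria"
  dsn = "username:password@tcp(localhost:3306)/mydb"

  [[inputs.sql.query]]
    query = "SELECT value, tag FROM metrics"
    measurement = "sql_metrics"
```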
### Example data-source-name DSN
The given examples are just that, so please check the driver documentation for the exact format
and available options and parameters. Please note that the format of a DSN might also change
between driver versions.
### Type conversions
Telegraf relies on type conversion of the database driver and/or the golang sql framework. In case you find
any problem, please open an issue!
## Help
If nothing seems to work, you might find help in the telegraf forum or in the chat.
### The documentation is wrong
Please open an issue or even better send a pull-request!
### I found a bug
Please open an issue or even better send a pull-request!
### My database is not supported
We currently cannot support CGO drivers in Telegraf! Please check if a **pure Go** driver for the [golang sql framework](https://golang.org/pkg/database/sql/) exists.
If you found such a driver, please let us know by opening an issue or even better by sending a pull-request!

View file

@ -0,0 +1,60 @@
# Supported Platforms
This doc helps define the platform support for Telegraf. See the
[install guide][] for specific options for installing Telegraf.
Bug reports should be submitted only for supported platforms that are under
general support, not extended or paid support. In general, Telegraf supports
Linux, macOS, Microsoft Windows, and FreeBSD.
Telegraf is written in Go, which supports many operating systems. Golang.org
has a [table][go-table] of valid OS and architecture combinations and the Go
Wiki has more specific [minimum requirements][go-reqs] for Go itself. Telegraf
may work on, and produce builds for, other operating systems, and users are welcome to
build their own binaries for them. Again, bug reports must be made on a
supported platform.
[install guide]: /docs/INSTALL_GUIDE.md
[go-table]: https://golang.org/doc/install/source#environment
[go-reqs]: https://github.com/golang/go/wiki/MinimumRequirements#operating-systems
## FreeBSD
Telegraf supports releases under FreeBSD security support. See the
[FreeBSD security page][] for specific versions.
[FreeBSD security page]: https://www.freebsd.org/security/#sup
## Linux
Telegraf will support the latest generally supported versions of major Linux
distributions. This does not include extended-support releases, where customers
can pay for additional support.
Below are some of the major distributions and the releases Telegraf intends to support:
* [Debian][]: Releases supported by security and release teams
* [Fedora][]: Releases currently supported by Fedora team
* [Red Hat Enterprise Linux][]: Releases under full support
* [Ubuntu][]: Interim and LTS releases in standard support
[Debian]: https://wiki.debian.org/LTS
[Fedora]: https://fedoraproject.org/wiki/Releases
[Red Hat Enterprise Linux]: https://access.redhat.com/support/policy/updates/errata#Life_Cycle_Dates
[Ubuntu]: https://ubuntu.com/about/release-cycle
## macOS
Telegraf supports macOS releases supported by Apple. Release history is
available from [endoflife.date][wp-macos].
[wp-macos]: https://endoflife.date/macos
## Microsoft Windows
Telegraf intends to support current versions of [Windows][] and
[Windows Server][]. The release must be under mainstream or general support,
not under any paid or extended security support.
[Windows]: https://learn.microsoft.com/en-us/lifecycle/faq/windows
[Windows Server]: https://learn.microsoft.com/en-us/windows-server/get-started/windows-server-release-info

137
docs/TEMPLATE_PATTERN.md Normal file
View file

@ -0,0 +1,137 @@
# Template Patterns
Template patterns are a mini language that describes how a dot delimited
string should be mapped to and from [metrics][].
A template has the form:
```text
"host.mytag.mytag.measurement.measurement.field*"
```
Where the following keywords can be set:
1. `measurement`: specifies that this section of the graphite bucket corresponds
to the measurement name. This can be specified multiple times.
2. `field`: specifies that this section of the graphite bucket corresponds
to the field name. This can be specified multiple times.
3. `measurement*`: specifies that all remaining elements of the graphite bucket
correspond to the measurement name.
4. `field*`: specifies that all remaining elements of the graphite bucket
correspond to the field name.
Any part of the template that is not a keyword is treated as a tag key. This
can also be specified multiple times.
**NOTE:** `measurement` must be specified in your template.
**NOTE:** `field*` cannot be used in conjunction with `measurement*`.
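As an illustration only (not Telegraf's actual implementation), the keyword
rules above can be sketched in Go:

```go
package main

import (
	"fmt"
	"strings"
)

// applyTemplate is a simplified sketch of how a template maps a
// dot-delimited bucket onto a measurement, a field name, and tags.
// Keywords handled: measurement, field, measurement*, field*; any
// other element is treated as a tag key. Repeated tag keys are joined
// with a dot, repeated measurement/field parts with the separator.
func applyTemplate(template, bucket, separator string) (string, string, map[string]string) {
	tags := make(map[string]string)
	var measurement, field []string
	tmpl := strings.Split(template, ".")
	parts := strings.Split(bucket, ".")

	for i, keyword := range tmpl {
		if i >= len(parts) {
			break
		}
		switch keyword {
		case "measurement":
			measurement = append(measurement, parts[i])
		case "field":
			field = append(field, parts[i])
		case "measurement*":
			measurement = append(measurement, parts[i:]...)
		case "field*":
			field = append(field, parts[i:]...)
		default: // anything else is a tag key
			if v, ok := tags[keyword]; ok {
				tags[keyword] = v + "." + parts[i]
			} else {
				tags[keyword] = parts[i]
			}
		}
	}
	return strings.Join(measurement, separator), strings.Join(field, separator), tags
}

func main() {
	m, f, tags := applyTemplate("measurement.measurement.region.field*",
		"cpu.usage.eu-east.idle.percentage", "_")
	fmt.Println(m, f, tags["region"]) // cpu_usage idle_percentage eu-east
}
```

The examples in the following sections can all be traced through this sketch.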
## Examples
### Measurement & Tag Templates
The most basic template is to specify a single transformation to apply to all
incoming metrics. So the following template:
```toml
templates = [
"region.region.measurement*"
]
```
would result in the following Graphite -> Telegraf transformation.
```text
us.west.cpu.load 100
=> cpu.load,region=us.west value=100
```
Multiple templates can also be specified, but these should be differentiated
using _filters_ (see below for more details)
```toml
templates = [
"*.*.* region.region.measurement", # <- all 3-part measurements will match this one.
"*.*.*.* region.region.host.measurement", # <- all 4-part measurements will match this one.
]
```
### Field Templates
The field keyword tells Telegraf to give the metric that field name.
So the following template:
```toml
separator = "_"
templates = [
"measurement.measurement.field.field.region"
]
```
would result in the following Graphite -> Telegraf transformation.
```text
cpu.usage.idle.percent.eu-east 100
=> cpu_usage,region=eu-east idle_percent=100
```
The field key can also be derived from all remaining elements of the graphite
bucket by specifying `field*`:
```toml
separator = "_"
templates = [
"measurement.measurement.region.field*"
]
```
which would result in the following Graphite -> Telegraf transformation.
```text
cpu.usage.eu-east.idle.percentage 100
=> cpu_usage,region=eu-east idle_percentage=100
```
### Filter Templates
Users can also filter the template(s) to use based on the name of the bucket,
using glob matching, like so:
```toml
templates = [
"cpu.* measurement.measurement.region",
"mem.* measurement.measurement.host"
]
```
which would result in the following transformation:
```text
cpu.load.eu-east 100
=> cpu_load,region=eu-east value=100
mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```
### Adding Tags
Additional tags that do not exist on the received metric can be added by
specifying them after the pattern.
Tags have the same format as the line protocol.
Multiple tags are separated by commas.
```toml
templates = [
"measurement.measurement.field.region datacenter=1a"
]
```
would result in the following Graphite -> Telegraf transformation.
```text
cpu.usage.idle.eu-east 100
=> cpu_usage,region=eu-east,datacenter=1a idle=100
```
[metrics]: /docs/METRICS.md

126
docs/TLS.md Normal file
View file

@ -0,0 +1,126 @@
# Transport Layer Security
There is an ongoing effort to standardize TLS options across plugins. When
possible, plugins will provide the standard settings described below. With the
exception of the advanced configuration, available TLS settings will be
documented in the sample configuration.
## Client Configuration
For client TLS support we have the following options:
```toml
## Enable/disable TLS
## Set to true/false to enforce TLS being enabled/disabled. If not set,
## enable TLS only if any of the other options are specified.
# tls_enable =
## Root certificates for verifying server certificates encoded in PEM format.
# tls_ca = "/etc/telegraf/ca.pem"
## The public and private key pairs for the client encoded in PEM format. May
## contain intermediate certificates.
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
# passphrase for encrypted private key, if it is in PKCS#8 format. Encrypted PKCS#1 private keys are not supported.
# tls_key_pwd = "changeme"
## Skip TLS verification.
# insecure_skip_verify = false
## Send the specified TLS server name via SNI.
# tls_server_name = "foo.example.com"
```
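As an example, these options can be set on any plugin supporting client TLS,
e.g. an HTTP-based input (the URL and file paths below are illustrative):

```toml
[[inputs.http]]
  urls = ["https://server.example.com/metrics"]

  tls_ca = "/etc/telegraf/ca.pem"
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"
  ## Only for testing; never skip verification in production
  # insecure_skip_verify = true
```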
## Server Configuration
The server TLS configuration provides support for TLS mutual authentication:
```toml
## Set one or more allowed client CA certificate file names to
## enable mutually authenticated TLS connections.
# tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
## Set one or more allowed DNS names to enable a whitelist
## for verifying incoming client certificates.
## All available SANs in the certificate are checked;
## if any of them matches, the request is accepted.
# tls_allowed_dns_names = ["client.example.org"]
## Add service certificate and key.
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
# passphrase for encrypted private key, if it is in PKCS#8 format. Encrypted PKCS#1 private keys are not supported.
# tls_key_pwd = "changeme"
```
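For instance, a sketch of a listener input using mutual TLS (the plugin choice
and file paths are illustrative):

```toml
[[inputs.http_listener_v2]]
  service_address = ":8443"
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"
  ## Clients must present a certificate signed by this CA
  tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
```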
### Advanced Configuration
For plugins using the standard server configuration you can also set several
advanced settings. These options are not included in the sample configuration
in the interest of brevity.
```toml
## Define list of allowed ciphers suites. If not defined the default ciphers
## supported by Go will be used.
## ex: tls_cipher_suites = [
## "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
## "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
## "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
## "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
## "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
## "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
## "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
## "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
## "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
## "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
## "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
## "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
## "TLS_RSA_WITH_AES_128_GCM_SHA256",
## "TLS_RSA_WITH_AES_256_GCM_SHA384",
## "TLS_RSA_WITH_AES_128_CBC_SHA256",
## "TLS_RSA_WITH_AES_128_CBC_SHA",
## "TLS_RSA_WITH_AES_256_CBC_SHA"
## ]
# tls_cipher_suites = []
## Minimum TLS version that is acceptable.
# tls_min_version = "TLS10"
## Maximum SSL/TLS version that is acceptable.
# tls_max_version = "TLS13"
```
Cipher suites for use with `tls_cipher_suites`:
- `TLS_RSA_WITH_RC4_128_SHA`
- `TLS_RSA_WITH_3DES_EDE_CBC_SHA`
- `TLS_RSA_WITH_AES_128_CBC_SHA`
- `TLS_RSA_WITH_AES_256_CBC_SHA`
- `TLS_RSA_WITH_AES_128_CBC_SHA256`
- `TLS_RSA_WITH_AES_128_GCM_SHA256`
- `TLS_RSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_ECDSA_WITH_RC4_128_SHA`
- `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA`
- `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA`
- `TLS_ECDHE_RSA_WITH_RC4_128_SHA`
- `TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA`
- `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`
- `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`
- `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`
- `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`
- `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`
- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`
- `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305`
- `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`
- `TLS_AES_128_GCM_SHA256`
- `TLS_AES_256_GCM_SHA384`
- `TLS_CHACHA20_POLY1305_SHA256`
TLS versions for use with `tls_min_version` or `tls_max_version`:
- `TLS10`
- `TLS11`
- `TLS12`
- `TLS13`

100
docs/TOML.md Normal file
View file

@ -0,0 +1,100 @@
# TOML
Telegraf uses TOML as its configuration language. The following outlines a few
common questions and issues that cause confusion.
## Reference and Validator
For all things TOML related, please consult the [TOML Spec][] and consider
using a TOML validator. In VSCode, use the [Even Better TOML][] extension, or
use the [TOML Lint][] website to validate your TOML config.
[TOML Spec]: https://toml.io/en/v1.0.0
[Even Better TOML]: https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml
[TOML Lint]: https://www.toml-lint.com/
## Multiple TOML Files
TOML itself does not support multiple files; reading multiple files is a
convenience Telegraf provides for users.
Users should be aware that when Telegraf reads a user's config from multiple
files or directories, it reads one file at a time and combines all settings as
if they were one big file.
## Single Table vs Array of Tables
Telegraf uses a single agent table (e.g. `[agent]`) to control high-level agent
specific configurations. This section can only be defined once for all config
files and should be in the first file read in to take effect. This cannot be
defined per-config file.
Telegraf also uses array of tables (e.g. `[[inputs.file]]`) to define multiple
plugins. These can be specified as many times as a user wishes.
## In-line Table vs Table
In some cases, a configuration option for a plugin may define a table of
configuration options. Take for example, the ability to add arbitrary tags to
an input plugin:
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[inputs.cpu.tags]
tag1 = "foo"
tag2 = "bar"
```
Users should understand that these tables *must* be at the end of the plugin
definition, because any key-value pair is assumed to be part of that table. The
following demonstrates how this can cause confusion:
```toml
[[inputs.cpu]]
totalcpu = true
[inputs.cpu.tags]
tag1 = "foo"
tag2 = "bar"
percpu = false # this is treated as a tag to add, not a config option
```
Note that TOML does not care about how a user indents the config or other
whitespace, so the `percpu` option is considered a tag.
A far better approach to avoid this situation is to use inline table syntax:
```toml
[[inputs.cpu]]
tags = {tag1 = "foo", tag2 = "bar"}
percpu = false
totalcpu = true
```
This way the tags value can go anywhere in the config and avoids possible
confusion.
## Basic String vs String Literal
In basic strings, signified by double-quotes, certain characters like the
backslash and double quote contained in a basic string need to be escaped for
the string to be valid.
For example, the following invalid TOML includes a Windows path with
unescaped backslashes:
```toml
path = "C:\Program Files\" # this is invalid TOML
```
Users can either escape the backslashes or use a literal string, which is
signified by single-quotes:
```toml
path = "C:\\Program Files\\"
path = 'C:\Program Files\'
```
Literal strings return exactly what you type. Because there is no escaping in
literal strings, you cannot include an apostrophe in one.

129
docs/WINDOWS_SERVICE.md Normal file
View file

@ -0,0 +1,129 @@
# Running Telegraf as a Windows Service
Telegraf natively supports running as a Windows Service. Outlined below are
the general steps to set it up.
1. Obtain the telegraf windows distribution
2. Create the directory `C:\Program Files\Telegraf` or use a custom directory
if desired
3. Place the telegraf.exe and the telegraf.conf config file into the directory,
either `C:\Program Files\Telegraf` or the custom directory of your choice.
If you install in a different location simply specify the `--config`
parameter with the desired location.
4. To install the service into the Windows Service Manager, run the command
as administrator. Make sure to wrap parameters containing spaces in double
quotes:
```shell
> "C:\Program Files\Telegraf\telegraf.exe" service install
```
5. Edit the configuration file to meet your needs
6. To check that it works, run:
```shell
> "C:\Program Files\Telegraf\telegraf.exe" --config "C:\Program Files\Telegraf\telegraf.conf" --test
```
7. To start collecting data, run:
```shell
> net start telegraf
```
or
```shell
> "C:\Program Files\Telegraf\telegraf.exe" service start
```
or use the Windows service manager to start the service
Please also check the Windows event log or your configured log-file for errors
during startup.
## Config Directory
You can also specify a `--config-directory` for the service to use:
1. Create a directory for config snippets: `C:\Program Files\Telegraf\telegraf.d`
2. Include the `--config-directory` option when registering the service:
```shell
> "C:\Program Files\Telegraf\telegraf.exe" --config C:\"Program Files"\Telegraf\telegraf.conf --config-directory C:\"Program Files"\Telegraf\telegraf.d service install
```
## Other supported operations
Telegraf can manage its own service through the `service` command:
| Command | Effect |
|----------------------------------|------------------------------------------|
| `telegraf.exe service install` | Install telegraf as a service |
| `telegraf.exe service uninstall` | Remove the telegraf service |
| `telegraf.exe service start` | Start the telegraf service |
| `telegraf.exe service stop` | Stop the telegraf service |
| `telegraf.exe service status` | Query the status of the telegraf service |
## Install multiple services
Running multiple instances of Telegraf is seldom needed, as you can run
multiple instances of each plugin and route metric flow using the metric
filtering options. However, if you do need to run multiple telegraf instances
on a single system, you can install the service with the `--service-name` and
`--display-name` flags to give the services unique names:
```shell
> "C:\Program Files\Telegraf\telegraf.exe" --service-name telegraf-1 service install --display-name "Telegraf 1"
> "C:\Program Files\Telegraf\telegraf.exe" --service-name telegraf-2 service install --display-name "Telegraf 2"
```
## Auto restart and restart delay
By default the service will not automatically restart on failure. Providing the
`--auto-restart` flag during installation will always restart the service with
a default delay of 5 minutes. To change this to, for example, 3 minutes,
additionally provide the `--restart-delay 3m` flag. The delay can be any valid
`time.Duration` string.
## Troubleshooting
When Telegraf runs as a Windows service, Telegraf logs all messages concerning
the service startup to the Windows event log. All messages and errors occurring
during runtime will be logged to the log-target you configured.
Check the event log for errors reported by the `telegraf` service (or the
service-name you configured) during service startup:
`Event Viewer -> Windows Logs -> Application`
### Common error #1067
When installing Telegraf as a service in Windows, always double-check that you
specify the full path of the config file; otherwise the service will fail to
start. Use
```shell
> "C:\Program Files\Telegraf\telegraf.exe" --config "C:\MyConfigs\telegraf.conf" service install
```
instead of
```shell
> "C:\Program Files\Telegraf\telegraf.exe" --config "telegraf.conf" service install
```
### Service is killed during shutdown
When shutting down Windows, the Telegraf service tries to stop cleanly when
receiving the corresponding notification from the Windows service manager. The
exit process involves stopping all inputs, processors and aggregators and
finally flushing all remaining metrics to the output(s). In case many metrics
are not yet flushed this final step might take some time. However, Windows will
kill the service and the corresponding process after a predefined timeout
(usually 5 seconds).
You can change that timeout in the registry under
```text
HKLM\SYSTEM\CurrentControlSet\Control\WaitToKillServiceTimeout
```
**NOTE:** The value is in milliseconds and applies to **all** services!

View file

@ -0,0 +1,8 @@
# Code Style
Code is required to be formatted using `gofmt`; this covers most code style
requirements. It is also highly recommended to use `goimports` to
automatically order imports.
Please try to keep line length under 80 characters; the exact number of
characters is not strict, but it generally helps with readability.

84
docs/developers/DEBUG.md Normal file
View file

@ -0,0 +1,84 @@
# Debug
The following describes how to use the [delve][1] debugger with telegraf
during development. Delve has many, very well documented [subcommands][2] and
options.
[1]: https://github.com/go-delve/delve
[2]: https://github.com/go-delve/delve/blob/master/Documentation/usage/README.md
## CLI
To run telegraf manually, users can run:
```bash
go run ./cmd/telegraf --config config.toml
```
To attach delve with a similar config, users can run the following. Note the
additional `--` to specify flags passed to telegraf. Additional flags need to
go after this double dash:
```bash
$ dlv debug ./cmd/telegraf -- --config config.toml
Type 'help' for list of commands.
(dlv)
```
At this point a user could set breakpoints and continue execution.
## Visual Studio Code
Visual Studio Code's [go language extension][20] includes the ability to easily
make use of [delve for debugging][21]. Check out this [full tutorial][22] from
the go extension's wiki.
A basic config is all that is required along with additional arguments to tell
Telegraf where the config is located:
```json
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Launch Package",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${fileDirname}",
"args": ["--config", "/path/to/config"]
}
]
}
```
[20]: https://code.visualstudio.com/docs/languages/go
[21]: https://code.visualstudio.com/docs/languages/go#_debugging
[22]: https://github.com/golang/vscode-go/wiki/debugging
## GoLand
JetBrains' [GoLand][30] also includes full featured [debugging][31] options.
The following is an example debug config to run Telegraf with a config:
```xml
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="build &amp; run" type="GoApplicationRunConfiguration" factoryName="Go Application">
<module name="telegraf" />
<working_directory value="$PROJECT_DIR$" />
<parameters value="--config telegraf.conf" />
<kind value="DIRECTORY" />
<package value="github.com/influxdata/telegraf" />
<directory value="$PROJECT_DIR$/cmd/telegraf" />
<filePath value="$PROJECT_DIR$" />
<method v="2" />
</configuration>
</component>
```
[30]: https://www.jetbrains.com/go/
[31]: https://www.jetbrains.com/help/go/debugging-code.html

View file

@ -0,0 +1,93 @@
# Deprecation
Deprecation is the primary tool for making changes in Telegraf. A deprecation
indicates that the community should move away from using a feature, and
documents that the feature will be removed in the next major update (2.0).
Key to deprecation is that the feature remains in Telegraf and the behavior is
not changed.
We do not have a strict definition of a breaking change. All code changes
change behavior, the decision to deprecate or make the change immediately is
decided based on the impact.
## Deprecate plugins
Add an entry to the plugin deprecation list (e.g. in
`plugins/inputs/deprecations.go`). Include the deprecation version and any
replacement, e.g.
```golang
"logparser": {
Since: "1.15.0",
Notice: "use 'inputs.tail' with 'grok' data format instead",
},
```
The entry can contain an optional `RemovalIn` field specifying the planned version for removal of the plugin.
Also add the deprecation warning to the plugin's README:
```markdown
# Logparser Input Plugin
### **Deprecated in 1.10**: Please use the [tail][] plugin along with the
`grok` [data format][].
[tail]: /plugins/inputs/tail/README.md
[data format]: /docs/DATA_FORMATS_INPUT.md
```
Telegraf will automatically check if a deprecated plugin is configured and print a warning:
```text
2022-01-26T20:08:15Z W! DeprecationWarning: Plugin "inputs.logparser" deprecated since version 1.15.0 and will be removed in 2.0.0: use 'inputs.tail' with 'grok' data format instead
```
## Deprecate options
Mark the option as deprecated in the sample config, include the deprecation
version and any replacement.
```toml
## Broker to publish to.
## deprecated in 1.7; use the brokers option
# url = "amqp://localhost:5672/influxdb"
```
In the plugins configuration struct, add a `deprecated` tag to the option:
```go
type AMQP struct {
URL string `toml:"url" deprecated:"1.7.0;use 'brokers' instead"`
Precision string `toml:"precision" deprecated:"1.2.0;option is ignored"`
}
```
The `deprecated` tag has the format `<since version>[;removal version];<notice>` where the `removal version` is optional. The specified deprecation info is automatically displayed by Telegraf if the option is used in the config:
```text
2022-01-26T20:08:15Z W! DeprecationWarning: Option "url" of plugin "outputs.amqp" deprecated since version 1.7.0 and will be removed in 2.0.0: use 'brokers' instead
```
### Option value
If a specific option value is being deprecated, the method `models.PrintOptionValueDeprecationNotice` needs to be called in the plugin's `Init` method.
## Deprecate metrics
In the README document the metric as deprecated. If there is a replacement field,
tag, or measurement then mention it.
```markdown
- system
- fields:
- uptime_format (string, deprecated in 1.10: use `uptime` field)
```
Add filtering to the sample config, leave it commented out.
```toml
[[inputs.system]]
## Uncomment to remove deprecated metrics.
# fieldexclude = ["uptime_format"]
```

View file

@ -0,0 +1,79 @@
# Logging
## Plugin Logging
You can access the Logger for a plugin by defining a field named `Log`. This
`Logger` is configured internally with the plugin name and alias so they do not
need to be specified for each log call.
```go
type MyPlugin struct {
Log telegraf.Logger `toml:"-"`
}
```
You can then use this Logger in the plugin. Use the method corresponding to
the log level of the message.
```go
p.Log.Errorf("Unable to write to file: %v", err)
```
## Agent Logging
In other sections of the code it is required to add the log level and module
manually:
```go
log.Printf("E! [agent] Error writing to %s: %v", output.LogName(), err)
```
## When to Log
Log a message if an error occurs but the plugin can continue working. For
example if the plugin handles several servers and only one of them has a fatal
error, it can be logged as an error.
Use logging judiciously for debug purposes. Since Telegraf does not currently
support setting the log level on a per-module basis, it is especially important
not to overdo it with debug logging.
If the plugin is listening on a socket, log a message with the address of the socket:
```go
p.Log.Infof("Listening on %s://%s", protocol, l.Addr())
```
## When not to Log
Don't use logging to emit performance data or other metadata about the plugin;
instead use the `internal` plugin and the `selfstat` package.
Don't log fatal errors in the plugin that require the plugin to return, instead
return them from the function and Telegraf will handle the logging.
Don't log for static configuration errors, check for them in a plugin `Init()`
function and return an error there.
Don't log a warning every time a plugin is called for situations that are
normal on some systems.
## Log Level
The log level is indicated by a single character at the start of the log
message. Adding this prefix is not required when using the Plugin Logger.
- `D!` Debug
- `I!` Info
- `W!` Warning
- `E!` Error
## Style
Log messages should be capitalized and be a single line.
If it includes data received from another system or process, such as the text
of an error message, the text should be quoted with `%q`.
Use the `%v` format for the Go error type instead of `%s` to ensure a nil error
is printed.

View file

@ -0,0 +1,49 @@
# Metric Format Changes
When making changes to an existing input plugin, care must be taken not to change the metric format in ways that will cause trouble for existing users. This document helps developers understand how to make metric format changes safely.
## Changes can cause incompatibilities
If the metric format changes, data collected in the new format can be incompatible with data in the old format. Database queries designed around the old format may not work with the new format. This can cause application failures.
Some metric format changes don't cause incompatibilities. Also, some unsafe changes are necessary. How do you know what changes are safe and what to do if your change isn't safe?
## Guidelines
The main guideline is just to keep compatibility in mind when making changes. Often developers are focused on making a change that fixes their particular problem and they forget that many people use the existing code and will upgrade. When you're coding, keep existing users and applications in mind.
### Renaming, removing, reusing
Database queries refer to the metric and its tags and fields by name. Any Telegraf code change that changes those names has the potential to break an existing query. Similarly, removing tags or fields can break queries.
Changing the meaning of an existing tag value or field value or reusing an existing one in a new way isn't safe. Although queries that use these tags/field may not break, they will not work as they did before the change.
Adding a field doesn't break existing queries. Queries that select all fields and/or tags (like "select * from") will return extra data, but this is often useful.
### Performance and storage
Time series databases can store large amounts of data but many of them don't perform well on high cardinality data. If a metric format change includes a new tag that holds high cardinality data, database performance could be reduced enough to cause existing applications not to work as they previously did. Metric format changes that dramatically increase the number of tags or fields of a metric can increase database storage requirements unexpectedly. Both of these types of changes are unsafe.
### Make unsafe changes opt-in
If your change has the potential to seriously affect existing users, the change must be opt-in. To do this, add a plugin configuration setting that lets the user select the metric format. Make the setting's default value select the old metric format. When new users add the plugin they can choose the new format and get its benefits. When existing users upgrade, their config files won't have the new setting so the default will ensure that there is no change.
When adding a setting, avoid using a boolean and consider instead a string or int for future flexibility. A boolean can only handle two formats but a string can handle many. For example, compare `use_new_format=true` and `features=["enable_foo_fields"]`; the latter is much easier to extend and still very descriptive.
If you want to encourage existing users to use the new format you can log a warning once on startup when the old format is selected. The warning should tell users in a gentle way that they can upgrade to a better metric format. If it doesn't make sense to maintain multiple metric formats forever, you can change the default on a major release or even remove the old format completely. See [[Deprecation]] for details.
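As a sketch of the opt-in pattern described above (the `features` option and the `enable_foo_fields` value are hypothetical names, not an existing Telegraf API), the old format stays the default when the option is absent:

```go
package main

import "fmt"

// ExamplePlugin sketches an opt-in metric-format switch.
type ExamplePlugin struct {
	Features []string `toml:"features"`
}

// newFormat reports whether the user opted in to the new metric format.
// An absent option (empty slice) keeps the old format, so existing
// configurations are unaffected by an upgrade.
func (p *ExamplePlugin) newFormat() bool {
	for _, f := range p.Features {
		if f == "enable_foo_fields" {
			return true
		}
	}
	return false
}

func main() {
	old := &ExamplePlugin{}
	upgraded := &ExamplePlugin{Features: []string{"enable_foo_fields"}}
	fmt.Println(old.newFormat(), upgraded.newFormat())
}
```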
### Utility
Changes should be useful to many or most users. A change that is only useful for a small number of users may not be accepted, even if it's off by default.
## Summary table
| | delete | rename | add |
| ------- | ------ | ------ | --- |
| metric | unsafe | unsafe | safe |
| tag | unsafe | unsafe | be careful with cardinality |
| field | unsafe | unsafe | ok as long as it's useful for existing users and is worth the added space |
## References
InfluxDB Documentation: "Schema and data layout"

View file

@ -0,0 +1,70 @@
# Packaging
Building the packages for Telegraf is automated using [Make](https://en.wikipedia.org/wiki/Make_(software)). Just running `make` will build a Telegraf binary for the operating system and architecture you are using (if it is supported). If you need to build a different package, you can run `make package`, which will build all the supported packages. Since you will most likely only want a subset, you can define the subset of packages to be built by overriding the `include_packages` variable like so: `make package include_packages="amd64.deb"`. You can also build all packages for a specific architecture like so: `make package include_packages="$(make amd64)"`.
The packaging steps require certain tools to be set up beforehand. These dependencies are listed in the `ci.docker` file, which you can find in the scripts directory. Therefore it is recommended to use Docker to build the artifacts; see more details below.
## Go Version
Telegraf will be built using the latest version of Go whenever possible.
### Update CI image
Incrementing the version is maintained by the core Telegraf team because it requires access to an internal docker repository that hosts the docker CI images. When a new version is released, the following process is followed:
1. Within the `Makefile`, `.circleci/config.yml`, and `scripts/ci.docker` files
update the Go versions to the new version number
2. Run `make ci`, this requires quay.io internal permissions
3. The files `scripts/installgo_linux.sh`, `scripts/installgo_mac.sh`, and
`scripts/installgo_windows.sh` need to be updated as well with the new Go
version and SHA
4. Create a pull request with these new changes, and verify the CI passes and
uses the new docker image
See the [previous PRs](https://github.com/influxdata/telegraf/search?q=chore+update+go&type=commits) as examples.
### Access to quay.io
A member of the team needs to invite you to the quay.io organization.
To push new images, the user needs to do the following:
1. Create a password if the user logged in using Google authentication
2. Download an encrypted username/password from the quay.io user page
3. Run `docker login quay.io` and enter in the encrypted username and password
from the previous step
## Package using Docker
This packaging method uses the CI images, and is very similar to how the
official packages are created on release. This is the recommended method for
building the rpm/deb as it is less system dependent.
Pull the CI images from quay, the version corresponds to the version of Go
that is used to build the binary:
```shell
docker pull quay.io/influxdb/telegraf-ci:1.9.7
```
Start a shell in the container:
```shell
docker run -ti quay.io/influxdb/telegraf-ci:1.9.7 /bin/bash
```
From within the container:
1. `go get -d github.com/influxdata/telegraf`
2. `cd /go/src/github.com/influxdata/telegraf`
3. `git checkout release-1.10`
* Replace tag `release-1.10` with the version of Telegraf you would like to build
4. `git reset --hard 1.10.2`
5. `make deps`
6. `make package include_packages="amd64.deb"`
* Change `include_packages` to change what package you want, run `make help` to see possible values
From the host system, copy the build artifacts out of the container:
```shell
docker cp romantic_ptolemy:/go/src/github.com/influxdata/telegraf/build/telegraf-1.10.2-1.x86_64.rpm .
```

View file

@ -0,0 +1,66 @@
# Profiling
This article describes how to collect performance traces and memory profiles
from Telegraf. If you are submitting this for an issue, please include the
version.txt generated below.
Use the `--pprof-addr` option to enable the profiler, the easiest way to do
this may be to add this line to `/etc/default/telegraf`:
```shell
TELEGRAF_OPTS="--pprof-addr localhost:6060"
```
Restart Telegraf to activate the profile address.
## Trace Profile
Collect a trace while the performance issue is occurring. This
example collects a 10-second trace:
```shell
curl 'http://localhost:6060/debug/pprof/trace?seconds=10' > trace.bin
telegraf --version > version.txt
go env GOOS GOARCH >> version.txt
```
The `trace.bin` and `version.txt` files can be sent in for analysis or, if desired, you can
analyze the trace with:
```shell
go tool trace trace.bin
```
## Memory Profile
Collect a heap memory profile:
```shell
curl 'http://localhost:6060/debug/pprof/heap' > mem.prof
telegraf --version > version.txt
go env GOOS GOARCH >> version.txt
```
Analyze:
```shell
$ go tool pprof mem.prof
(pprof) top5
```
## CPU Profile
Collect a 30s CPU profile:
```shell
curl 'http://localhost:6060/debug/pprof/profile' > cpu.prof
telegraf --version > version.txt
go env GOOS GOARCH >> version.txt
```
Analyze:
```shell
$ go tool pprof cpu.prof
(pprof) top5
```

1
docs/developers/README.md Symbolic link
View file

@ -0,0 +1 @@
../../CONTRIBUTING.md

185
docs/developers/REVIEWS.md Normal file
View file

@ -0,0 +1,185 @@
# Reviews
Pull requests require two approvals before being merged. Expect several rounds of back and forth on
reviews; non-trivial changes are rarely accepted on the first pass. It might take some time
until you see a first review, so please be patient.
All pull requests should follow the style and best practices in the
[CONTRIBUTING.md](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md)
document.
## Process
The review process is roughly structured as follows:
1. Submit a pull request.
Please check that you signed the [CLA](https://www.influxdata.com/legal/cla/) (and the [Corporate CLA](https://www.influxdata.com/legal/ccla/) if you are contributing code as an employee of your company). Provide a short description of your submission and reference issues that you potentially close. Make sure the CI tests are all green and there are no linter issues.
1. Get feedback from a first reviewer and a `ready for final review` tag.
Please constructively work with the reviewer to get your code into a mergeable state (see also [below](#reviewing-plugin-code)).
1. Get a final review by one of the InfluxData maintainers.
Please fix any issue raised.
1. Wait for the pull-request to be merged.
It might take some time until your PR gets merged, depending on the release cycle and the type of
your pull-request (bugfix, enhancement of existing code, new plugin, etc). Remember, it might be necessary to rebase your code before merge to resolve conflicts.
Please read the review comments carefully, fix the related part of the code and/or respond in case there is anything unclear. Maintainers will add the `waiting for response` tag to PRs to make it clear we are waiting on the submitter for updates. __Once the tag is added, if there is no activity on a pull request or the contributor does not respond, our bot will automatically close the PR after two weeks!__ If you expect a longer period of inactivity or you want to abandon a pull request, please let us know.
In case you still want to continue with the PR, feel free to reopen it.
## Reviewing Plugin Code
- Avoid variables scoped to the package. Everything should be scoped to the plugin struct, since multiple instances of the same plugin are allowed and package-level variables will cause race conditions.
- SampleConfig must match the readme, but not include the plugin name.
- structs should include toml tags for fields that are expected to be editable from the config. eg `toml:"command"` (snake_case)
- plugins that want to log should declare the Telegraf logger, not use the log package. eg:
```go
Log telegraf.Logger `toml:"-"`
```
(in tests, you can do `myPlugin.Log = testutil.Logger{}`)
- Initialization and config checking should be done on the `Init() error` function, not in the Connect, Gather, or Start functions.
- `Init() error` should not contain connections to external services. If anything fails in Init, Telegraf will consider it a configuration error and refuse to start.
- plugins should avoid synchronization code if they are not starting goroutines. Plugin functions are never called in parallel.
- avoid goroutines when you don't need them and removing them would simplify the code
- errors should almost always be checked.
- avoid boolean fields when a string or enumerated type would be better for future extension. Lots of boolean fields also make the code difficult to maintain.
- use config.Duration instead of internal.Duration
- compose tls.ClientConfig as opposed to specifying all the TLS fields manually
- http.Client should be declared once on `Init() error` and reused, (or better yet, on the package if there's no client-specific configuration). http.Client has built-in concurrency protection and reuses connections transparently when possible.
- avoid doing network calls in loops where possible, as this has a large performance cost. This isn't always possible to avoid.
- when processing batches of records with multiple network requests (some outputs that need to partition writes do this), return an error when you want the whole batch to be retried, log the error when you want the batch to continue without the record
- consider using the StreamingProcessor interface instead of the (legacy) Processor interface
- avoid network calls in processors when at all possible. If it's necessary, it's possible, but complicated (see processor.reversedns).
- avoid dependencies when:
- they require cgo
- they pull in massive projects instead of small libraries
- they could be replaced by a simple http call
- they seem unnecessary, superfluous, or gratuitous
- consider adding build tags if plugins have OS-specific considerations
- use the right logger log levels so that Telegraf is normally quiet eg `plugin.Log.Debugf()` only shows up when running Telegraf with `--debug`
- consistent field types: dynamically setting the type of a field should be strongly avoided as it causes problems that are difficult to solve later, made worse by having to worry about backwards compatibility in future changes. For example, if a numeric value comes from a string field and it is not clear whether the field can sometimes be a float, the author should pick either a float or an int, and parse that field consistently every time. Better to sometimes truncate a float, or to always store ints as floats, rather than changing the field type, which causes downstream problems with output databases.
- backwards compatibility: We work hard not to break existing configurations during new changes. Upgrading Telegraf should be a seamless transition. Possible tools to make this transition smooth are:
- enumerable type fields that allow you to customize behavior (avoid boolean feature flags)
- version fields that can be used to opt in to newer changed behavior without breaking old (see inputs.mysql for example)
- a new version of the plugin if it has changed significantly (eg outputs.influxdb and outputs.influxdb_v2)
- Logger and README deprecation warnings
- changing the default value of a field can be okay, but will affect users who have not specified the field and should be approached cautiously.
- The general rule here is "don't surprise me": users should not be caught off-guard by unexpected or breaking changes.
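Several of the points above (validation in `Init() error`, no connections in `Init`, declaring the `http.Client` once, snake_case toml tags) can be sketched with the standard library only. The struct and option names are illustrative; a real plugin would use `config.Duration` and the Telegraf logger:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// MyPlugin is a hypothetical plugin struct; everything is scoped to the
// struct, never to the package.
type MyPlugin struct {
	URL     string        `toml:"url"`
	Timeout time.Duration `toml:"timeout"` // a real plugin uses config.Duration

	client *http.Client
}

// Init checks static configuration and builds the shared http.Client once.
// It must not contact the network; connection errors belong in Gather.
func (p *MyPlugin) Init() error {
	if p.URL == "" {
		return errors.New("option 'url' must be set")
	}
	if p.Timeout == 0 {
		p.Timeout = 5 * time.Second
	}
	p.client = &http.Client{Timeout: p.Timeout}
	return nil
}

func main() {
	p := &MyPlugin{URL: "http://localhost:8086"}
	fmt.Println(p.Init(), p.client != nil)
}
```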
## Linting
Each pull request will have the appropriate linters checking the files for any common mistakes. The github action Super Linter is used: [super-linter](https://github.com/github/super-linter). If it is failing you can click on the action and read the logs to figure out the issue. You can also run the github action locally by following these instructions: [run-linter-locally.md](https://github.com/github/super-linter/blob/main/docs/run-linter-locally.md). You can find more information on each of the linters in the super linter readme.
## Testing
Sufficient unit tests must be created. New plugins must always contain
some unit tests. Bug fixes and enhancements should include new tests, but
they can be allowed if the reviewer thinks it would not be worth the effort.
[Table Driven Tests](https://github.com/golang/go/wiki/TableDrivenTests) are
encouraged to reduce boiler plate in unit tests.
The [stretchr/testify](https://github.com/stretchr/testify) library should be
used for assertions within the tests when possible, with preference towards
github.com/stretchr/testify/require.
Primarily use the require package to avoid cascading errors:
```go
assert.Equal(t, lhs, rhs) // avoid
require.Equal(t, lhs, rhs) // good
```
## Configuration
The config file is the primary interface and should be carefully scrutinized.
Ensure the [[SampleConfig]] and
[README](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/EXAMPLE_README.md)
match with the current standards.
READMEs should:
- be spaces, not tabs
- be indented consistently, matching other READMEs
- have two `#` for comments
- have one `#` for defaults, which should always match the default value of the plugin
- include all appropriate types as a list for enumerable field types
- include a useful example, avoiding "example", "test", etc.
- include tips for any common problems
- include example output from the plugin, if input/processor/aggregator/parser/serializer
## Metric Schema
Telegraf metrics are heavily based on InfluxDB points, but have some
extensions to support other outputs and metadata.
New metrics must follow the recommended
[schema design](https://docs.influxdata.com/influxdb/latest/concepts/schema_and_data_layout/).
Each metric should be evaluated for _series cardinality_, proper use of tags vs
fields, and should use existing patterns for encoding metrics.
Metrics use `snake_case` naming style.
### Enumerations
Generally enumeration data should be encoded as a tag. In some cases it may
be desirable to also include the data as an integer field:
```shell
net_response,result=success result_code=0i
```
### Histograms
Use tags for each range with the `le` tag, and `+Inf` for the values out of
range. This format is inspired by the Prometheus project:
```shell
cpu,le=0.0 usage_idle_bucket=0i 1486998330000000000
cpu,le=50.0 usage_idle_bucket=2i 1486998330000000000
cpu,le=100.0 usage_idle_bucket=2i 1486998330000000000
cpu,le=+Inf usage_idle_bucket=2i 1486998330000000000
```
### Lists
Lists are tricky, but the general technique is to encode the list using a tag,
creating one series per item in the list.
### Counters
Counters retrieved from other projects often are in one of two styles,
monotonically increasing without reset and reset on each interval. No attempt
should be made to switch between these two styles but if given the option it
is preferred to use the non-resetting variant. This style is more resilient in
the face of downtime and does not contain a fixed time element.
### Source tag
When metrics are gathered from another host, the metric schema should have a tag
named "source" that contains the other host's name. See [this feature
request](https://github.com/influxdata/telegraf/issues/4413) for details.
The metric schema doesn't need to have a tag for the host running
telegraf. Telegraf agent code can add a tag named "host" and by default
containing the hostname reported by the kernel. This can be configured through
the "hostname" and "omit_hostname" agent settings.
## Go Best Practices
In general code should follow best practice describe in [Code Review
Comments](https://github.com/golang/go/wiki/CodeReviewComments).
### Networking
All network operations should have appropriate timeouts. The ability to
cancel the option, preferably using a context, is desirable but not always
worth the implementation complexity.
### Channels
Channels should be used judiciously as they often complicate the design and
can easily be used improperly. Only use them when they are needed.

View file

@ -0,0 +1,81 @@
# Sample Configuration
The sample config file is generated from the results of the `SampleConfig()` functions of the plugins.
You can generate a full sample
config:
```shell
telegraf config
```
You can also generate the config for a particular plugin using the `--usage`
option:
```shell
telegraf --usage influxdb
```
## Style
In the config file we use 2-space indentation. Since the config is
[TOML](https://github.com/toml-lang/toml) the indentation has no meaning.
Documentation is double commented, full sentences, and ends with a period.
```toml
## This text describes what the exchange_type option does.
# exchange_type = "topic"
```
Try to give every parameter a default value whenever possible. If a
parameter does not have a default or must frequently be changed, then leave it
uncommented.
```toml
## Brokers are the AMQP brokers to connect to.
brokers = ["amqp://localhost:5672"]
```
Options where the default value is usually sufficient are normally commented
out. The commented out value is the default.
```toml
## What an exchange type is.
# exchange_type = "topic"
```
If you want to show an example of a possible setting filled out that is
different from the default, show both:
```toml
## Static routing key. Used when no routing_tag is set or as a fallback
## when the tag specified in routing_tag is not found.
## example: routing_key = "telegraf"
# routing_key = ""
```
Unless parameters are closely related, add a space between them. Usually,
closely related parameters share a single description.
```toml
## If true, queue will be declared as an exclusive queue.
# queue_exclusive = false
## If true, queue will be declared as an auto deleted queue.
# queue_auto_delete = false
## Authentication credentials for the PLAIN auth_method.
# username = ""
# password = ""
```
Parameters should usually be describable in a few sentences. If it takes
much more than this, try to provide a shorter explanation here and a more
complete description in the Configuration section of the plugin's
[README](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/example).
Boolean parameters should be used judiciously. You should try to think of
something better, since they don't scale well: things are often not truly
boolean, and such options frequently end up with implicit dependencies, where an
option only does something if certain other options are also set.

View file

@ -0,0 +1,145 @@
# State-persistence for plugins
## Purpose
Plugin state-persistence allows a plugin to save its state across restarts of
Telegraf. This might be necessary if data-input (or output) is stateful and
depends on the result of a previous operation.
If, for example, you query data from a service providing a `next` token, your
plugin would need to know the last token received in order to make the next
query. However, this token is lost after a restart of Telegraf if not persisted,
and thus your only option is to restart the query chain, potentially handling
redundant data and producing unnecessary traffic.
This is where state-persistence comes into play. The state-persistence framework
allows your plugin to store a _state_ on shutdown and load that _state_ again
on startup of Telegraf.
## State format
The _state_ of a plugin can be any structure or datatype that is serializable
using Golang's JSON serializer. It can be a key-value map or a more complex
structure. E.g.
```go
type MyState struct {
CurrentToken string
LastToken string
NextToken string
FilterIDs []int64
}
```
would represent a valid state.
## Implementation
To enable state-persistence in your plugin you need to implement the
`StatefulPlugin` interface defined in `plugin.go`. The interface looks as
follows:
```go
type StatefulPlugin interface {
GetState() interface{}
SetState(state interface{}) error
}
```
The `GetState()` function should return the current state of the plugin
(see [state format](#state-format)). Please note that this function should
_always_ succeed and should always be callable directly after `Init()`. So make
sure your relevant data-structures are initialized in `Init` to prevent panics.
Telegraf will call the `GetState()` function on shutdown and will then compile
an overall Telegraf state from the information of all stateful plugins. This
state is then persisted to disk if (and only if) the `statefile` option in the
`agent` section is set. You do _not_ need to take care of any serialization or
writing; Telegraf will handle this for you.
When starting Telegraf, the overall persisted Telegraf state will be restored,
if `statefile` is set. To do so, the `SetState()` function is called with the
deserialized state of the plugin. Please note that this function is called
directly _after_ the `Init()` function of your plugin. You need to make sure
that the given state is what you expect using a type assertion! Make sure this
won't panic but rather return a meaningful error.
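A minimal sketch of the interface under these constraints (the state and field names are illustrative, not from a real plugin):

```go
package main

import "fmt"

// MyState is the serializable per-instance state (see "State format").
type MyState struct {
	NextToken string
}

// MyPlugin sketches an implementation of the StatefulPlugin interface.
type MyPlugin struct {
	state MyState
}

// GetState must always succeed, even directly after Init().
func (p *MyPlugin) GetState() interface{} {
	return p.state
}

// SetState is called directly after Init(); the type assertion is guarded
// so an unexpected state returns an error instead of panicking.
func (p *MyPlugin) SetState(state interface{}) error {
	s, ok := state.(MyState)
	if !ok {
		return fmt.Errorf("unexpected state type %T", state)
	}
	p.state = s
	return nil
}

func main() {
	p := &MyPlugin{}
	err := p.SetState(MyState{NextToken: "abc"})
	fmt.Println(err, p.GetState().(MyState).NextToken)
}
```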
To assign the state to the correct plugin, Telegraf relies on a plugin ID.
See the ["State assignment" section](#state-assignment) for more details on
the procedure and ["Plugin Identifier" section](#plugin-identifier) for more
details on ID generation.
## State assignment
When restoring the state on loading, Telegraf needs to ensure that each plugin
_instance_ gets the correct state. To do so, a plugin ID is used. By default
this ID is generated automatically for each plugin instance but can be
overwritten if necessary (see [Plugin Identifier](#plugin-identifier)).
State assignment needs to be able to handle multiple instances of the same
plugin type correctly, e.g. if the user has configured multiple instances of
your plugin with different `server` settings. Here, the state saved for
`foo.example.com` needs to be restored to the plugin instance handling
`foo.example.com` on next startup of Telegraf and should _not_ end up at server
`bar.example.com`. So the plugin identifier used for the assignment should be
consistent over restarts of Telegraf.
In case plugin instances are added to the configuration between restarts, no
state is restored _for those instances_. Furthermore, all states referencing
plugin identifiers that are no longer valid are dropped and will be ignored. This
can happen when plugin instances are removed or their ID changes.
## Plugin Identifier
As outlined above, the plugin identifier (plugin ID) is crucial when assigning
states to plugin instances. By default, Telegraf will automatically generate an
identifier for each plugin configured when starting up. The ID is consistent
over restarts of Telegraf and is based on the _entire configuration_ of the
plugin. This means for each plugin instance, all settings in the configuration
will be concatenated and hashed to derive the ID. The resulting ID will then be
used in both save and restore operations making sure the state ends up in a
plugin with _exactly_ the same configuration that created the state.
However, this also means that the plugin identifier _changes_ whenever _any_
of the configuration settings is changed! For example, if your plugin is defined
as
```go
type MyPlugin struct {
Server string `toml:"server"`
Token string `toml:"token"`
Timeout config.Duration `toml:"timeout"`
offset int
}
```
with `offset` being your state, the plugin ID will change if a user changes the
`timeout` setting in the configuration file. As a consequence the state cannot
be restored. This might be undesirable for your plugin, therefore you can
overwrite the ID generation by implementing the `PluginWithID` interface (see
`plugin.go`). This interface defines an `ID() string` function returning the
identifier of the current plugin _instance_. When implementing this function you
should take the following criteria into account:
1. The identifier has to be _unique_ for your plugin _instance_ (not only for
the plugin type) to make sure the state is assigned to the correct instance.
1. The identifier has to be _consistent_ across startups/restarts of Telegraf
as otherwise the state cannot be restored. Make sure the order of
configuration settings doesn't matter.
1. Make sure to _include all settings relevant for state assignment_. In
the example above, the plugin's `token` setting might or might not be
relevant to identify the plugin instance.
1. Make sure to _leave out all settings irrelevant for state assignment_. In
the example above, the plugin's `timeout` setting likely is not relevant
for the state and can be left out.
Which settings are relevant for the state are plugin specific. For example, if
the `offset` is a property of the _server_ the `token` setting is irrelevant.
However, if the `offset` is specific for a certain user suddenly the `token`
setting is relevant.
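A hedged sketch of such an `ID()` implementation for the example plugin above, hashing only the settings assumed relevant for state assignment in a fixed order:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// MyPlugin is the example plugin from above; `timeout` is deliberately
// left out of the ID as it is assumed irrelevant for the state.
type MyPlugin struct {
	Server string `toml:"server"`
	Token  string `toml:"token"`
}

// ID returns a consistent, per-instance identifier derived from the
// relevant settings in a fixed order.
func (p *MyPlugin) ID() string {
	h := sha256.New()
	fmt.Fprintf(h, "inputs.my_plugin\n%s\n%s\n", p.Server, p.Token)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	a := &MyPlugin{Server: "foo.example.com", Token: "t1"}
	b := &MyPlugin{Server: "bar.example.com", Token: "t1"}
	fmt.Println(a.ID() != b.ID())
}
```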
As an alternative to generating an identifier automatically, the plugin can allow
the user to specify that ID directly in a configuration setting. However, please
note that this might lead to colliding IDs in larger setups and should thus be
avoided.

View file

@ -0,0 +1,6 @@
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or create aliases and configure ordering, etc.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
---
Secrets defined by a store are referenced with `@{<store-id>:<secret_key>}` in
the Telegraf configuration. Only certain Telegraf plugins and options
support secret stores. To see which plugins and options support
secrets, see their respective documentation (e.g.
`plugins/outputs/influxdb/README.md`). If the plugin's README has the
`Secret-store support` section, it will detail which options support secret
store usage.
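For illustration, a hypothetical configuration referencing a secret named
`influx_token` from a store with the ID `mystore` could look like this (the
store type and plugin shown are placeholders for whatever your setup uses):

```toml
[[secretstores.os]]
  id = "mystore"

[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8086"]
  token = "@{mystore:influx_token}"
```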
---
This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:
1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
output for this plugin
---
In addition to the plugin-specific and global configuration settings the plugin
supports options for specifying the behavior when experiencing startup errors
using the `startup_error_behavior` setting. Available values are:
- `error`: Telegraf will stop and exit in case of startup errors. This is the
default behavior.
- `ignore`: Telegraf will ignore startup errors for this plugin and disables it
but continues processing for all other plugins.
- `retry`: Telegraf will retry starting the plugin in every gather or write
cycle in case of startup errors. The plugin is disabled until
the startup succeeds.
- `probe`: Telegraf will probe the plugin's function (if possible) and disables the plugin
in case probing fails. If the plugin does not support probing, Telegraf will
behave as if `ignore` was set instead.
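For example, to disable a plugin instead of failing Telegraf when its service
is unreachable at startup (`inputs.example` is a placeholder plugin name):

```toml
[[inputs.example]]
  ## Applies to this plugin instance only
  startup_error_behavior = "ignore"
```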
---

**`docs/specs/README.md`**
# Telegraf Specification Overview
## Objective
Define and layout the Telegraf specification process.
## Overview
The general goal of a spec is to detail the work that needs to get accomplished
for a new feature. A developer should be able to pick up a spec and have a
decent understanding of the objective, the steps required, and most of the
general design decisions.
The specs can then live in the Telegraf repository to share and involve the
community in the process of planning larger changes or new features. The specs
also serve as a public historical record for changes.
## Process
The general workflow is for a user to put up a PR with a spec outlining the
task, have any discussion in the PR, reach consensus, and ultimately commit
the finished spec to the repo.
While researching a new feature may involve an investment of time, writing the
spec should be relatively quick. It should not take hours of time.
## Spec naming
Please name the actual file prefixed with `tsd` and the next available
number, for example:
* tsd-001-agent-write-ahead-log.md
* tsd-002-inputs-apache-increase-timeout.md
* tsd-003-serializers-parquet.md
All lower-case and separated by hyphens.
## What belongs in a spec
A spec should involve the creation of a markdown file with at least an objective
and overview:
* Objective (required) - One sentence headline
* Overview (required) - Explain the reasoning for the new feature and any
historical information. Answer why this is needed.
Please feel free to make a copy of the template.md and start with that.
The user is free to add additional sections or parts in order to express and
convey a new feature. For example this might include:
* Keywords - Help identify what the spec is about
* Is/Is-not - Explicitly state what this change includes and does not include
* Prior Art - Point at existing or previous PRs, issues, or other works that
demonstrate the feature or need for it.
* Open Questions - Section with open questions that can get captured in
updates to the PR
## Changing existing specs
Small changes which are non-substantive, like grammar or formatting are gladly
accepted.
After a feature is complete it may make sense to come back and update a spec
based on the final result.
Whether substantive edits to an existing spec will be accepted is entirely up
to the maintainers. In general, finished specs should be considered complete
and done. However, priorities, details, or
other situations may evolve over time and as such introduce the need to make
updates.
---

**`docs/specs/template.md`**
# Title
## Objective
One sentence explanation of the feature.
## Overview
Background and details about the feature.
## Keywords
A few items to specify what areas of Telegraf this spec affects (e.g. outputs,
inputs, processors, aggregators, agent, packaging, etc.)
## Is/Is-not
## Prior art
## Open questions
---
# Plugin and Plugin Option Deprecation
## Objective
Specifies the process of deprecating and removing plugins, plugin settings
including values of those settings or features.
## Keywords
procedure, removal, all plugins
## Overview
Over time the number of plugins, plugin options and plugin features grow and
some of those plugins or options are either not relevant anymore, have been
superseded or subsumed by other plugins or options. To be able to remove those,
this specification defines a process to deprecate plugins, plugin options and
plugin features including a timeline and minimal time-frames. Additionally, the
specification defines a framework to annotate deprecations in the code and
inform users about such deprecations.
## User experience
In the deprecation phase a warning will be shown at Telegraf startup with the
following content
```text
Plugin "inputs.logparser" deprecated since version 1.15.0 and will be removed in 1.40.0: use 'inputs.tail' with 'grok' data format instead
```
Similar warnings will be shown when removing plugin options or option values.
This provides users with time to replace the deprecated plugin in their
configuration file.
After the shown release (`v1.40.0` in this case) the warning will be promoted
to an error preventing Telegraf from starting. The user now has to adapt the
configuration file to start Telegraf.
## Time frames and considerations
When deprecating parts of Telegraf, it is important to provide users with enough
time to migrate to alternative solutions before actually removing those parts.
In general, plugins, plugin options or option values should only be deprecated
if a suitable alternative exists! In those cases, the deprecations should
predate the removal by at least one and a half years. In current release terms
this corresponds to six minor-versions. However, there might be circumstances
requiring a prolonged time between deprecation and removal to ensure a smooth
transition for users.
In versions between the deprecation and removal of plugins, plugin options or
option values, Telegraf must log a *warning* on startup including information
about the version introducing the deprecation, the version of removal and a
user-facing hint on suitable replacements. In this phase Telegraf should
operate normally even with deprecated plugins, plugin options or option values
being set in the configuration files.
Starting from the removal version, Telegraf must show an *error* message for
deprecated plugins present in the configuration including all information listed
above. Removed plugin options and option values should be handled as invalid
settings in the configuration files and must lead to an error. In this phase,
Telegraf should *stop running* until all deprecated plugins, plugin options and
option values are removed from the configuration files.
## Deprecation Process
The deprecation process comprises the steps below.
### File issue
In the filed issue you should outline which plugin, plugin option or feature
you want to deprecate and *why*! Determine in which version the plugin should
be removed.
Try to reach an agreement in the issue before continuing and get a sign off
from the maintainers!
### Submit deprecation pull-request
Send a pull request adding deprecation information to the code and update the
plugin's `README.md` file. Depending on what you want to deprecate this
comprises different locations and steps as detailed below.
Once the deprecation pull-request is merged and Telegraf is released, we have
to wait for the targeted Telegraf version for actually removing the code.
#### Deprecating a plugin
When deprecating a plugin you need to add an entry to the `deprecation.go` file
in the respective plugin category with the following format
```golang
"<plugin name>": {
Since: "<x.y.z format version of the next minor release>",
RemovalIn: "<x.y.z format version of the plugin removal>",
Notice: "<user-facing hint e.g. on replacements>",
},
```
If you for example want to remove the `inputs.logparser` plugin you should add
```golang
"logparser": {
Since: "1.15.0",
    RemovalIn: "1.40.0",
Notice: "use 'inputs.tail' with 'grok' data format instead",
},
```
to `plugins/inputs/deprecations.go`. By doing this, Telegraf will show a
deprecation warning to the user starting from version `1.15.0` including the
`Notice` you provided. The plugin can then be removed in version `1.40.0`.
Additionally, you should update the plugin's `README.md` adding a paragraph
mentioning since when the plugin is deprecated, when it will be removed and a
hint to alternatives or replacements. The paragraph should look like this
```text
**Deprecated in version v1.15.0 and scheduled for removal in v1.40.0**:
Please use the [tail][] plugin with the [`grok` data format][grok parser]
instead!
```
#### Deprecating an option
To deprecate a plugin option, remove the option from the `sample.conf` file and
add the deprecation information to the structure field in the code. If you, for
example, want to deprecate the `ssl_enabled` option in `inputs.example` you
should add
```golang
type Example struct {
...
SSLEnabled bool `toml:"ssl_enabled" deprecated:"1.3.0;1.40.0;use 'tls_*' options instead"`
}
```
to schedule the setting for removal in version `1.40.0`. The last element of
the `deprecated` tag is a user-facing notice similar to plugin deprecation.
#### Deprecating an option-value
Sometimes, certain option values become deprecated or superseded by other
options or values. To deprecate those option values, remove them from
`sample.conf` and add the deprecation info in the code if the deprecated value
is *actually used* via
```golang
func (e *Example) Init() error {
...
if e.Mode == "old" {
models.PrintOptionDeprecationNotice(telegraf.Warn, "inputs.example", "mode", telegraf.DeprecationInfo{
Since: "1.23.1",
RemovalIn: "1.40.0",
Notice: "use 'v1' instead",
})
}
...
return nil
}
```
This will show a warning if the deprecated `old` value is used for the `mode`
setting in `inputs.example` with a user-facing notice.
### Submit pull-request for removing code
Once the plugin, plugin option or option-value is deprecated, we have to wait
for the `RemovalIn` release to remove the code. In the examples above, this
would be version `1.40.0`. After all scheduled bugfix-releases are done, with
`1.40.0` being the next release, you can create a pull-request to actually
remove the deprecated code.
Please make sure you remove the plugin, plugin option or option value and the
code referencing those. This might also comprise the `all` files of your plugin
category, test-cases including those of other plugins, README files or other
documentation. For removed plugins, please keep the deprecation info in
`deprecations.go` so users can find a reference when switching from a really
old version.
Make sure you add an `Important Changes` section to the `CHANGELOG.md` file
describing the removal with a reference to your PR.
---
# Telegraf Custom-Builder
## Objective
Provide a tool to build a customized, smaller version of Telegraf with only
the required plugins included.
## Keywords
tool, binary size, customization
## Overview
The Telegraf binary continues to grow as new plugins and features are added
and dependencies are updated. Users running on resource constrained systems
such as embedded-systems or inside containers might suffer from the growth.
This document specifies a tool to build a smaller Telegraf binary tailored to
the plugins configured and actually used, removing unnecessary and unused
plugins. The implementation should be able to cope with configured parsers and
serializers including defaults for those plugin categories. Valid Telegraf
configuration files, including directories containing such files, are the input
to the customization process.
The customization tool might not be available for older versions of Telegraf.
Furthermore, the degree of customization and thus the effective size reduction
might vary across versions. The tool must create a single static Telegraf
binary. Distribution packages or containers are *not* targeted.
## Prior art
[PR #5809](https://github.com/influxdata/telegraf/pull/5809) and
[telegraf-lite-builder](https://github.com/influxdata/telegraf/tree/telegraf-lite-builder/cmd/telegraf-lite-builder):
- Uses docker
- Uses browser:
- Generates a webpage to pick what options you want. User chooses plugins;
does not take a config file
- Build a binary, then minifies by stripping and compressing that binary
- Does some steps that belong in makefile, not builder
- Special case for upx
- Makes gzip, zip, tar.gz
- Uses gopkg.in?
- Can also work from the command line
[PR #8519](https://github.com/influxdata/telegraf/pull/8519)
- User chooses plugins OR provides a config file
[powers/telegraf-build](https://github.com/powersj/telegraf-build)
- User chooses plugins OR provides a config file
- Currently kept in separate repo
- Undoes changes to all.go files
[rawkode/bring-your-own-telegraf](https://github.com/rawkode/bring-your-own-telegraf)
- Uses docker
## Additional information
You might be able to further reduce the binary size of Telegraf by removing
debugging information. This is done by adding `-w` and `-s` to the linker flags
before building `LDFLAGS="-w -s"`.
However, please note that this removes information helpful for debugging issues
in Telegraf.
Additionally, you can use a binary packer such as [UPX](https://upx.github.io/)
to reduce the required *disk* space. This compresses the binary and decompresses
it again at runtime. However, this does not reduce memory footprint at runtime.
---
# Plugin State-Persistence
## Objective
Retain the state of stateful plugins across restarts of Telegraf.
## Keywords
framework, plugin, stateful, persistence
## Overview
Telegraf contains a number of plugins that hold an internal state while
processing. For some of the plugins this state is important for efficient
processing like the location when reading a large file or when continuously
querying data from a stateful peer requiring for example an offset or the last
queried timestamp. For those plugins it is important to persist their
internal state across restarts of Telegraf.
It is intended to
- allow for opt-in of plugins to store a state per plugin _instance_
- restore the state for each plugin instance at startup
- track the plugin instances over restarts to relate the stored state with a
corresponding plugin instance
- automatically compute plugin instance IDs based on the plugin configuration
- provide a way to manually specify instance IDs by the user
- _not_ restore states if the plugin configuration changed between runs
- make implementation easy for plugin developers
- make no assumption on the state _content_
The persistence will use the following steps:
- Compute a unique ID for each of the plugin _instances_
- Startup Telegraf plugins calling `Init()`, etc.
- Initialize persistence framework with the user specified `statefile` location
and load the state if present
- Determine all stateful plugin instances by fulfilling the `StatefulPlugin`
interface
- Restore plugin states (if any) for each plugin ID present in the state-file
- Run data-collection etc...
- On shutdown, stop all Telegraf plugins by calling `Stop()` or `Close()`
depending on the plugin type
- Query the state of all registered stateful plugins
- Create an overall state-map with the plugin instance ID as a key and the
serialized plugin state as value.
- Marshal the overall state-map and store to disk
Potential users of this functionality are plugins continuously querying
endpoints with information of a previous query (e.g. timestamps, offsets,
transaction tokens, etc.) The following plugins are known to have an internal
state. This is not a comprehensive list.
- `inputs.win_eventlog` ([PR #8281](https://github.com/influxdata/telegraf/pull/8281))
- `inputs.docker_log` ([PR #7749](https://github.com/influxdata/telegraf/pull/7749))
- `inputs.tail` (file offset)
- `inputs.cloudwatch` (`windowStart`/`windowEnd` parameters)
- `inputs.stackdriver` (`prevEnd` parameter)
### Plugin ID computation
The plugin ID is computed based on the configuration options specified for the
plugin instance. To generate the ID all settings are extracted as `string`
key-value pairs with the option name being the key and the value being the
configuration option setting. For nested configuration options, e.g. if the
plugin has a sub-table, the options are flattened with a canonical key. The
canonical key elements must be concatenated with a dot (`.`) separator. In case
the sub-element is a list of tables, the key must include the index of each
table prefixed by a hash sign i.e. `<parent>#<index>.<child>`.
The resulting key-value pairs of configuration options are then sorted by the
key in lexical order to make the resulting ID invariant against changes in the
order of configuration options. The key and the value of each pair are joined
by a colon (`:`) to a single `string`.
Finally, a SHA256 sum is computed across all key-value strings separated by a
`null` byte. The HEX representation of the resulting SHA256 is used as the
plugin instance ID.
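The described algorithm can be sketched as follows, assuming the configuration
options have already been flattened into string key-value pairs:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// instanceID computes the plugin instance ID from flattened
// configuration options: sort the pairs by key, join key and value
// with ':', hash all pairs separated by a null byte and return the
// hex representation of the SHA256 sum.
func instanceID(options map[string]string) string {
	keys := make([]string, 0, len(options))
	for k := range options {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		pairs = append(pairs, k+":"+options[k])
	}
	sum := sha256.Sum256([]byte(strings.Join(pairs, "\x00")))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Nested options use a dot-separated canonical key; tables in a
	// list include the table index prefixed by a hash sign.
	options := map[string]string{
		"server":          "https://example.org",
		"auth.token":      "secret",
		"query#0.timeout": "5s",
	}
	fmt.Println(len(instanceID(options))) // 64 hex characters
}
```

Because the pairs are sorted before hashing, reordering options in the
configuration file does not change the resulting ID.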
### State serialization format
The overall Telegraf state maps the plugin IDs (keys) to the serialized state
of the corresponding plugin (values). The state data returned by stateful
plugins is serialized to JSON. The resulting byte-sequence is used as the value
for the overall state. On-disk, the overall state of Telegraf is stored as JSON.
To restore the state of a plugin, the overall Telegraf state is first
deserialized from the on-disk JSON data and a lookup for the plugin ID is
performed in the resulting map. The value, if found, is then deserialized to the
plugin's state data-structure and provided to the plugin after calling `Init()`.
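A minimal sketch of this serialization scheme, using a hypothetical plugin
state structure and a made-up instance ID:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pluginState is a hypothetical state structure of a stateful plugin.
type pluginState struct {
	Offset    int64  `json:"offset"`
	LastQuery string `json:"last_query"`
}

// saveAndRestore serializes the given state into an overall state map
// keyed by the instance ID, marshals the map as the on-disk JSON and
// restores the plugin state from it again.
func saveAndRestore(id string, s pluginState) (pluginState, error) {
	raw, err := json.Marshal(s)
	if err != nil {
		return pluginState{}, err
	}
	onDisk, err := json.Marshal(map[string]json.RawMessage{id: raw})
	if err != nil {
		return pluginState{}, err
	}
	var overall map[string]json.RawMessage
	if err := json.Unmarshal(onDisk, &overall); err != nil {
		return pluginState{}, err
	}
	var restored pluginState
	err = json.Unmarshal(overall[id], &restored)
	return restored, err
}

func main() {
	restored, err := saveAndRestore("0123abcd", pluginState{Offset: 42, LastQuery: "2024-01-01T00:00:00Z"})
	if err != nil {
		panic(err)
	}
	fmt.Println(restored.Offset) // 42
}
```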
## Is / Is-not
### Is
- A framework to persist states over restarts of Telegraf
- A simple local state store
- A way to restore plugin states between restarts without configuration changes
- A unified API for plugins to use when requiring persistence of a state
### Is-Not
- A remote storage framework
- A way to store anything beyond fundamental plugin states
- A data-store or database
- A way to reassign plugin states if their configuration changes
- A tool to interactively adding/removing/modifying states of plugins
- A persistence guarantee beyond clean shutdown (i.e. no crash resistance)
## Prior art
- [PR #8281](https://github.com/influxdata/telegraf/pull/8281): Stores Windows
event-log bookmarks in the registry
- [PR #7749](https://github.com/influxdata/telegraf/pull/7749): Stores container
ID and log offset to a file at a user-provided path
- [PR #7537](https://github.com/influxdata/telegraf/pull/7537): Provides a
global state object and periodically queries plugin states to store the state
object to a JSON file. This approach does not provide an ID per plugin
_instance_ so it seems like there is only a single state for a plugin _type_
- [PR #9476](https://github.com/influxdata/telegraf/pull/9476): Registers
stateful plugins with the persister and automatically assigns an ID to plugin
_instances_ based on the configuration. The approach also allows overwriting
the automatic ID, e.g. with user-specified data. It uses the plugin instance ID
to store/restore state to the same plugin instance and queries the plugin
state on shutdown and write file (currently JSON).
---
# Configuration Migration
## Objective
Provides a subcommand and framework to migrate configurations containing
deprecated settings to a corresponding recent configuration.
## Keywords
configuration, deprecation, telegraf command
## Overview
With the deprecation framework of [TSD-001](tsd-001-deprecation.md) implemented
we see more and more plugins and options being scheduled for removal in the
future. Furthermore, deprecations become visible to the user due to the warnings
issued for removed plugins, plugin options and plugin option values.
To aid the user in mitigating deprecated configuration settings this
specification proposes the implementation of a `migrate` sub-command to the
Telegraf `config` command to automatically migrate the user's existing
configuration files away from the deprecated settings to an equivalent, recent
configuration. Furthermore, the specification describes the layout and
functionality of a plugin-based migration framework to implement migrations.
### `migrate` sub-command
The `migrate` sub-command of the `config` command should take a set of
configuration files and configuration directories and apply available migrations
to deprecated plugins, plugin options or plugin option-values in order to
generate new configuration files that do not make use of deprecated options.
In the process, the migration procedure must ensure that only plugins with
applicable migrations are modified. Existing configuration must be kept and not
be overwritten without manual confirmation of the user. This should be
accomplished by storing modified configuration files with a `.migrated` suffix
and leaving it to the user to overwrite the existing configuration with the
generated counterparts. If no migration is applied in a configuration file, the
command might not generate a new file and leave the original file untouched.
During migration, the configuration, plugin behavior, resulting metrics and
comments should be kept on a best-effort basis. Telegraf must inform the user
about applied migrations and potential changes in the plugin behavior or
resulting metrics. If a plugin cannot be automatically migrated but requires
manual intervention, Telegraf should inform the user.
### Migration implementations
To implement migrations for deprecated plugins, plugin option or plugin option
values, Telegraf must provide a plugin-based infrastructure to register and
apply implemented migrations based on the plugin-type. Only one migration per
plugin-type must be registered.
Developers must implement the required interfaces and register the migration
to the mentioned framework. The developer must provide the possibility to
exclude the migration at build-time according to
[TSD-002](tsd-002-custom-builder.md). Existing migrations can be extended but
must be cumulative such that any previous configuration migration functionality
is kept.
Resulting configurations should generate metrics equivalent to the previous
setup also making use of metric selection, renaming and filtering mechanisms.
In cases this is not possible, there must be a clear information to the user
what to expect and which differences might occur.
A migration may also be purely informative, i.e. notify the user that a plugin
has to be migrated manually, and should point users to additional information.
Deprecated plugins and plugin options must be removed from the migrated
configuration.
---
# Telegraf Output Buffer Strategy
## Objective
Introduce a new agent-level config option to choose a disk buffer strategy for
output plugin metric queues.
## Overview
Currently, when a Telegraf output metric queue fills, either due to incoming
metrics being too fast or various issues with writing to the output, oldest
metrics are overwritten and never written to the output. This specification
defines a set of options to make this output queue more durable by persisting
pending metrics to disk rather than only an in-memory limited size queue.
## Keywords
output plugins, agent configuration, persist to disk
## Agent Configuration
The configuration is at the agent-level, with options for:
- **Memory**, the current implementation, with no persistence to disk
- **Write-through**, all metrics are also written to disk using a
Write Ahead Log (WAL) file
- **Disk-overflow**, when the memory buffer fills, metrics are flushed to a
WAL file to avoid dropping overflow metrics
As well as an option to specify a directory to store the WAL files on disk,
with a default value. These configurations are global, and no change means
memory only mode, retaining current behavior.
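Assuming option names along the lines of this spec (the exact names and values
are illustrative, not final), the agent configuration might look like:

```toml
[agent]
  ## Buffer strategy: in-memory only (default), write-through WAL,
  ## or overflow-to-disk
  buffer_strategy = "overflow"
  ## Directory holding the per-output WAL files
  buffer_directory = "/var/lib/telegraf/buffer"
```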
## Metric Ordering and Tracking
Tracking metrics will be accepted on a successful write to the output
destination. Metrics will be written to their appropriate output in the order
they are received in the buffer regardless of which buffer strategy is chosen.
## Disk Utilization and File Handling
Each output plugin has its own in-memory output buffer, and will therefore
have its own WAL file for buffer persistence. This file may not exist
if Telegraf is successfully able to write all of its metrics without filling
the in-memory buffer in disk-overflow mode, or not at all in memory mode.
Telegraf should use one file per output plugin, and remove entries from the
WAL file as they are written to the output.
Telegraf will not make any attempt to limit the size on disk taken by these
files beyond cleaning up WAL files for metrics that have successfully been
flushed to their output destination. It is the user's responsibility to ensure
these files do not entirely fill the disk, both during Telegraf uptime and
with lingering files from previous instances of the program.
If WAL files exist for an output plugin from previous instances of Telegraf,
they will be picked up and flushed before any new metrics that are written
to the output. This is to ensure that these metrics are not lost, and to
ensure that output write order remains consistent.
Telegraf must additionally provide a way to manually flush WAL files via
some separate plugin or similar. This could be used as a way to ensure that
WAL files are properly written in the event that the output plugin changes
and the WAL file is unable to be detected by a new instance of Telegraf.
This plugin should not be required for use to allow the buffer strategy to
work.
## Is/Is-not
- Is a way to prevent metrics from being dropped due to a full memory buffer
- Is not a way to guarantee data safety in the event of a crash or system failure
- Is not a way to manage file system allocation size, file space will be used
until the disk is full
## Prior art
[Initial issue](https://github.com/influxdata/telegraf/issues/802)
[Loose specification issue](https://github.com/influxdata/telegraf/issues/14805)
---
# Startup Error Behavior
## Objective
Unified, configurable behavior on retriable startup errors.
## Keywords
inputs, outputs, startup, error, retry
## Overview
Many Telegraf plugins connect to an external service either on the same machine
or via network. On automated startup of Telegraf (e.g. via service) there is no
guarantee that those services are fully started yet, especially when they reside
on a remote host. More and more plugins implement mechanisms to retry reaching
their related service if they failed to do so on startup.
This specification intends to unify the naming of configuration-options, the
values of those options, and their semantic meaning. It describes the behavior
for the different options on handling startup-errors.
Startup errors are all errors occurring in calls to `Start()` for inputs and
service-inputs or `Connect()` for outputs. The behaviors described below
should only be applied in cases where the plugin *explicitly* states that an
startup error is *retriable*. This includes for example network errors
indicating that the host or service is not yet reachable or external
resources, like a machine or file, which are not yet available, but might become
available later. To indicate a retriable startup error the plugin should return
a predefined error-type.
In cases where the error cannot generally be determined to be retriable by
the plugin, the plugin might add configuration settings to let the user
configure that property. For example, where an error code indicates a fatal,
non-recoverable error in one case but a non-fatal, recoverable error in another
case.
## Configuration Options and Behaviors
Telegraf must introduce a unified `startup_error_behavior` configuration option
for input and output plugins. The option is handled directly by the Telegraf
agent and is not passed down to the plugins. The setting must be available on a
per-plugin basis and defines how Telegraf behaves on startup errors.
For all config option values Telegraf might retry to start the plugin for a
limited number of times during the startup phase before actually processing
data. This corresponds to the current behavior of Telegraf to retry three times
with a fifteen second interval before continuing processing of the plugins.
### `error` behavior
The `error` setting for the `startup_error_behavior` option causes Telegraf to
fail and exit on startup errors. This must be the default behavior.
### `retry` behavior
With the `retry` setting for the `startup_error_behavior` option Telegraf must
*not* fail on startup errors and should continue running. Telegraf must retry
starting the failed plugin in each gather or write cycle, for inputs or for
outputs respectively, for an unlimited number of times. Neither the
plugin's `Gather()` nor `Write()` method is called as long as the startup did
not succeed. Metrics sent to an output plugin will be buffered until the plugin
is actually started. If the metric-buffer limit is reached **metrics might be
dropped**!
In case a plugin signals a partially successful startup, e.g. a subset of the
given endpoints are reachable, Telegraf must try to fully startup the remaining
endpoints by calling `Start()` or `Connect()`, respectively, until full startup
is reached **and** trigger the plugin's `Gather()` or `Write()` methods.
### `ignore` behavior
When using the `ignore` setting for the `startup_error_behavior` option Telegraf
must *not* fail on startup errors and should continue running. On startup error,
Telegraf must ignore the plugin as-if it was not configured at all, i.e. the
plugin must be completely removed from processing.
### `probe` behavior
When using the `probe` setting for the `startup_error_behavior` option Telegraf
must *not* fail on startup errors and should continue running. On startup error,
Telegraf must ignore the plugin as-if it was not configured at all, i.e. the
plugin must be completely removed from processing, similar to the `ignore`
behavior. Additionally, Telegraf must probe the plugin (as defined in
[TSD-009][tsd_009]) after startup, if it implements the `ProbePlugin` interface.
If probing is available *and* returns an error Telegraf must *ignore* the
plugin as-if it was not configured at all.
[tsd_009]: /docs/specs/tsd-009-probe-on-startup.md
## Plugin Requirements
Plugins participating in handling startup errors must implement the `Start()`
or `Connect()` method for inputs and outputs respectively. Those methods must be
safe to be called multiple times during retries without leaking resources or
causing issues in the service used.
Furthermore, the `Close()` method of the plugins must be safe to be called for
cases where the startup failed without causing panics.
The plugins should return a `nil` error during startup to indicate a successful
startup or a retriable error (via predefined error type) to enable the defined
startup error behaviors. A non-retriable error (via predefined error type) or
a generic error will bypass the startup error behaviors and Telegraf must fail
and exit in the startup phase.
## Related Issues
- [#8586](https://github.com/influxdata/telegraf/issues/8586) `inputs.postgresql`
- [#9778](https://github.com/influxdata/telegraf/issues/9778) `outputs.kafka`
- [#13278](https://github.com/influxdata/telegraf/issues/13278) `outputs.cratedb`
- [#13746](https://github.com/influxdata/telegraf/issues/13746) `inputs.amqp_consumer`
- [#14365](https://github.com/influxdata/telegraf/issues/14365) `outputs.postgresql`
- [#14603](https://github.com/influxdata/telegraf/issues/14603) `inputs.nvidia-smi`
- [#14603](https://github.com/influxdata/telegraf/issues/14603) `inputs.rocm-smi`
# URL-Based Config Behavior
## Objective
Define the retry and reload behavior of remote URLs that are passed as config to
Telegraf. In terms of retry, Telegraf currently attempts to load a remote URL
three times and then exits. In terms of reload, Telegraf has no capability to
reload remote URL-based configs. This spec proposes options that allow users to
extend both capabilities.
## Keywords
config, error, retry, reload
## Overview
Telegraf allows for loading configurations from local files, directories, and
files via a URL. To handle situations where a configuration file is not yet
available, or the network is flaky, the first proposal is to introduce a new CLI
flag: `--config-url-retry-attempts`. This flag would continue to default to
three and would specify the number of attempts to fetch a remote URL during the
initial startup of Telegraf.
```sh
--config-url-retry-attempts=3 Number of times to attempt to obtain a remote
configuration via a URL during startup. Set to
-1 for unlimited attempts.
```
These attempts would block Telegraf from starting up completely until success or
until we have run out of attempts and exit.
Once Telegraf is up and running, users can use the `--watch` flag to watch local
files for changes and, when changes are made, reload Telegraf with the new
configuration. For remote URLs, I propose a new CLI flag:
`--config-url-watch-interval`. This flag would set an internal timer; whenever
it fires, Telegraf would check the remote URL file for updates.
```sh
--config-url-watch-interval=0s Time duration to check for updates to URL based
configuration files. Disabled by default.
```
At each interval, Telegraf would send an HTTP HEAD request to the configuration
URL, here is an example curl HEAD request and output:
```sh
$ curl --head http://localhost:8000/config.toml
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/3.12.3
Date: Mon, 29 Apr 2024 18:18:56 GMT
Content-type: application/octet-stream
Content-Length: 1336
Last-Modified: Mon, 29 Apr 2024 11:44:19 GMT
```
The proposal then is to store the last-modified value when the file is first
obtained and compare it against the value returned at each interval. There is no
need to parse the value; storing and comparing the raw string is sufficient. If
the values differ, trigger a reload.
If anything other than a 2xx response code is returned from the HEAD request,
Telegraf would print a warning message and retry at the next interval. Telegraf
will continue to run the existing configuration with no change.
If the value of last-modified is empty, which is very unlikely, Telegraf would
ignore this configuration file and print a warning message once about the
missing field.
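The comparison logic above can be sketched as a small helper. The function name
`shouldReload` and its signature are hypothetical, not part of Telegraf:

```go
package main

import "fmt"

// shouldReload compares the Last-Modified header stored at startup with the
// one returned by the latest HEAD request. The raw strings are compared; no
// date parsing is needed. An empty current value means the server did not
// send the header, so the file is skipped as described above.
func shouldReload(stored, current string) (reload, skip bool) {
	if current == "" {
		return false, true
	}
	return stored != current, false
}

func main() {
	reload, _ := shouldReload(
		"Mon, 29 Apr 2024 11:44:19 GMT",
		"Tue, 30 Apr 2024 09:00:00 GMT",
	)
	fmt.Println(reload) // true
}
```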
## Relevant Issues
* Configuration capabilities to retry for loading config via URL #[8854][]
* Telegraf reloads URL-based/remote config on a specified interval #[8730][]
[8854]: https://github.com/influxdata/telegraf/issues/8854
[8730]: https://github.com/influxdata/telegraf/issues/8730
# Partial write error handling
## Objective
Provide a way to pass information about partial metric write errors from an
output to the output model.
## Keywords
output plugins, write, error, output model, metric, buffer
## Overview
The output model wrapping each output plugin buffers metrics to be able to batch
those metrics for more efficient sending. In each flush cycle, the model
collects a batch of metrics and hands it over to the output plugin for writing
through the `Write` method. Currently, if writing succeeds (i.e. no error is
returned), _all metrics of the batch_ are removed from the buffer and are marked
as __accepted__ both in terms of statistics as well as in tracking-metric terms.
If writing fails (i.e. any error is returned), _all metrics of the batch_ are
__kept__ in the buffer for requeueing them in the next write cycle.
Issues arise when an output plugin cannot write all metrics of a batch but only
some of them to its service endpoint, e.g. due to the metrics not being
serializable or due to metrics being selectively rejected by the service on the
server side. This might happen when reaching submission limits, violating
service constraints, e.g. by out-of-order sends, or due to invalid characters in
the serialized metric.
In those cases, an output currently is only able to accept or reject the
_complete batch of metrics_ as there is no mechanism to inform the model (and
in turn the buffer) that only _some_ of the metrics in the batch were failing.
As a consequence, outputs often _accept_ the batch to avoid requeueing the
failing metrics in the next flush interval. This distorts statistics of
accepted metrics and causes misleading log messages claiming all metrics were
written successfully, which is not true. Even worse, for outputs ending up with
partial writes, e.g. only the first half of the metrics can be written to the
service, there is no way of telling the model to selectively accept the actually
written metrics. In turn, those outputs must internally buffer the remaining,
unwritten metrics, leading to a duplication of buffering logic and added code
complexity.
This specification aims at defining the handling of partially successful writes
and introduces the concept of a special _partial write error_ type to reflect
partial writes and partial serialization, overcoming the aforementioned issues
and limitations.
To do so, the _partial write error_ error type must contain a list of
successfully written metrics, to be marked __accepted__, both in terms of
statistics as well as in terms of metric tracking, and must be removed from the
buffer. Furthermore, the error must contain a list of metrics that cannot be
sent or serialized and cannot be retried. These metrics must be marked as
__rejected__, both in terms of statistics as well as in terms of metric
tracking, and must be removed from the buffer.
The error may contain a list of not-yet-written metrics to be __kept__ for the
next write cycle. Those metrics must not be marked and must be kept in the
buffer. If the error does not contain the list of not-yet-written metrics, this
list must be inferred using the accept and reject lists mentioned above.
To allow the model and the buffer to correctly handle tracking metrics, the
tracking information must be preserved during communication between the output
plugin, the model and the buffer through the specified error. To do so, all
metric lists should be communicated as indices into the batch.
For backward compatibility and simplicity, output plugins can return a `nil`
error to indicate that __all__ metrics of the batch are __accepted__. Similarly,
returning an error that is _not_ a _partial write error_ indicates that __all__
metrics of the batch should be __kept__ in the buffer for the next write cycle.
## Related Issues
- [issue #11942](https://github.com/influxdata/telegraf/issues/11942) for
contradicting log messages
- [issue #14802](https://github.com/influxdata/telegraf/issues/14802) for
rate-limiting multiple batch sends
- [issue #15908](https://github.com/influxdata/telegraf/issues/15908) for
infinite loop if single metrics cannot be written
# Probing plugins after startup
## Objective
Allow Telegraf to probe plugins during startup to enable enhanced plugin error
detection like availability of hardware or services
## Keywords
inputs, outputs, startup, probe, error, ignore, behavior
## Overview
When plugins are first instantiated, Telegraf will call the plugin's `Start()`
method (for inputs) or `Connect()` (for outputs), which will initialize its
configuration based on config options and the running environment. It is
sometimes the case that while the initialization step succeeds, the upstream
service on which the plugin relies is not actually running or cannot be
communicated with due to incorrect configuration or environmental problems. In
situations like this, Telegraf does not detect that the plugin's upstream
service is not functioning properly and thus will continually call the plugin
during each `Gather()` iteration. This often has the effect of polluting
journald and system logs with voluminous error messages, which creates issues
for system administrators who rely on such logs to identify other, unrelated
system problems.
More background discussion on this option, including other possible avenues, can
be viewed [here](https://github.com/influxdata/telegraf/issues/16028).
## Probing
Probing is an action whereby a plugin ensures, on a best-effort basis, that it
will be fully functional. This may comprise communicating with its external
service, trying to access required devices, entities or executables, etc. to
ensure that the plugin will not produce errors during e.g. data collection or
data output. Probing must *not* produce, process or output any metrics.
Plugins that support probing must implement the `ProbePlugin` interface. Such
plugins must behave in the following manner:
1. Return an error if the external dependencies (hardware, services,
executables, etc.) of the plugin are not available.
2. Return an error if information cannot be gathered (in the case of inputs) or
sent (in the case of outputs) due to unrecoverable issues. For example, invalid
authentication, missing permissions, or non-existent endpoints.
3. Otherwise, return `nil` indicating the plugin will be fully functional.
## Plugin Requirements
Plugins that allow probing must implement the `ProbePlugin` interface. The
exact implementation depends on the plugin's functionality and requirements,
but generally it should take the same actions as during normal operation, e.g.
calling `Gather()` or `Write()`, and check whether errors occur. If probing
fails, it must be safe to call the plugin's `Close()` method.
Input plugins must *not* produce metrics, and output plugins must *not* send
any metrics to the service. Plugins must *not* influence later data processing
or collection by modifying their internal state or the external state of the
service or hardware. For example, file offsets or other service states must be
reset so that no data is lost during the first gather or write cycle.
Plugins must return `nil` upon successful probing or an error otherwise.
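The requirements above can be sketched as follows; the `ProbePlugin` method set
and the `exampleInput` plugin are assumptions based on this spec, not Telegraf's
actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// ProbePlugin is the interface named in this spec; the method set shown
// here is an assumption derived from the text.
type ProbePlugin interface {
	Probe() error
}

// exampleInput is a hypothetical input whose external dependency (a device)
// may be missing. Probe() checks availability without producing any metrics
// and without mutating plugin or service state.
type exampleInput struct {
	deviceAvailable bool
}

func (p *exampleInput) Probe() error {
	if !p.deviceAvailable {
		return errors.New("required device not found")
	}
	return nil // plugin will be fully functional
}

func main() {
	var plugin ProbePlugin = &exampleInput{deviceAvailable: false}
	fmt.Println(plugin.Probe()) // required device not found
}
```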
## Related Issues
- [#16028](https://github.com/influxdata/telegraf/issues/16028)
- [#15916](https://github.com/influxdata/telegraf/pull/15916)
- [#16001](https://github.com/influxdata/telegraf/pull/16001)