
Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel (GPG key ID: FBB4F0E80A80222F)
4963 changed files with 677545 additions and 0 deletions

@@ -0,0 +1,232 @@
# SQL Output Plugin
This plugin writes metrics to a supported SQL database using a simple,
hard-coded database schema. There is a table for each metric type with the
table name corresponding to the metric name. There is a column per field
and a column per tag with an optional column for the metric timestamp.
A row is written for every metric. This means multiple metrics are never
merged into a single row, even if they have the same metric name, tags, and
timestamp.
The plugin uses Go's generic "database/sql" interface and third-party
drivers. See the driver-specific sections below for a list of supported
drivers and details.
⭐ Telegraf v1.19.0
🏷️ datastore
💻 all
## Getting started
To use the plugin, set the driver setting to the driver name appropriate for
your database. Then set the data source name (DSN). The format of the DSN varies
by driver but often includes a username, password, the database instance to use,
and the hostname of the database server. The user account must have privileges
to insert rows and create tables.
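For example, the skeleton of a working configuration might look like this (the
DSN value is a placeholder; see the driver-specific sections for the exact
format):

```toml
[[outputs.sql]]
  ## Pick the driver matching your database
  driver = "pgx"
  ## DSN format is driver-specific
  data_source_name = "postgres://telegraf:secret@localhost:5432/metrics"
```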
## Generated SQL
The plugin generates simple ANSI/ISO SQL that is likely to work on any DBMS. It
doesn't use language features that are specific to a particular DBMS. If you
want to use a feature that is specific to a particular DBMS, you may be able to
set it up manually outside of this plugin or through the init_sql setting.
The insert statements generated by the plugin use placeholder parameters. Most
database drivers use question marks as placeholders, but Postgres uses indexed
dollar signs ($1, $2, and so on). The plugin chooses the placeholder style
appropriate for the selected driver.
Because the tags and fields of metrics from input plugins can vary, the number
of columns inserted per row may also vary for a given metric. Since a table is
created based on the tags and fields present in the first metric received, the
created table may be missing columns that later metrics need. To avoid this
scenario, you might need to initialize the schema yourself.
## Advanced options
When the plugin first connects it runs SQL from the init_sql setting, allowing
you to perform custom initialization for the connection.
Before inserting a row, the plugin checks whether the table exists. If it
doesn't exist, the plugin creates the table. The existence check and the table
creation statements can be changed through template settings. The template
settings allow you to have the plugin create customized tables or skip table
creation entirely by setting the check template to any query that executes
without error, such as "select 1".
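For example, assuming you create and manage the tables yourself, you can make
the existence check a no-op so the plugin never issues a CREATE TABLE:

```toml
## Always succeeds, so table creation is skipped entirely
table_exists_template = "SELECT 1"
```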
The name of the timestamp column is "timestamp" but it can be changed with the
`timestamp_column` setting. The timestamp column can be completely disabled by
setting it to "".
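For instance, to store rows without the metric timestamp:

```toml
## An empty name disables the timestamp column
timestamp_column = ""
```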
By changing the table creation template, it's possible with some databases to
save a row insertion timestamp. You can add an additional column with a default
value to the template, like `CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP
DEFAULT CURRENT_TIMESTAMP, {COLUMNS})`.
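In the plugin configuration, that template looks like this (assuming your DBMS
supports column defaults):

```toml
table_template = "CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})"
```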
The mapping of metric types to SQL column types can be customized through the
convert settings.
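For example, with the default "unsigned_suffix" conversion style, the following
mapping produces `BIGINT` columns for integer fields and `BIGINT UNSIGNED` for
unsigned ones (the type names are illustrative; check what your DBMS supports):

```toml
[outputs.sql.convert]
  integer = "BIGINT"
  unsigned = "UNSIGNED"
```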
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Configuration
```toml @sample.conf
# Save metrics to an SQL Database
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
##  sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = ""
## Data source name
## The format of the data source name is different for each database driver.
## See the plugin readme for details.
data_source_name = ""
## Timestamp column name, set to empty to ignore the timestamp
# timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
## {TAG_COLUMN_NAMES} - tag column definitions (list of quoted identifiers)
##  {TIMESTAMP_COLUMN_NAME} - the name of the timestamp column, as configured above
# table_template = "CREATE TABLE {TABLE}({COLUMNS})"
## NOTE: For the clickhouse driver the default is:
# table_template = "CREATE TABLE {TABLE}({COLUMNS}) ORDER BY ({TAG_COLUMN_NAMES}, {TIMESTAMP_COLUMN_NAME})"
## Table existence check template
## Available template variables:
##  {TABLE} - table name as a quoted identifier
# table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL
# init_sql = ""
## Maximum amount of time a connection may be idle. "0s" means connections are
## never closed due to idle time.
# connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections
## are never closed due to age.
# connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. <= 0 means no
## idle connections are retained.
# connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
# connection_max_open = 0
## NOTE: Due to the way TOML is parsed, tables must be at the END of the
## plugin definition, otherwise additional config options are read as part of
## the table
## Metric type to SQL type conversion
## The values on the left are the data types Telegraf has and the values on
## the right are the data types Telegraf will use when sending to a database.
##
## The database values used must be data types the destination database
## understands. It is up to the user to ensure that the selected data type is
## available in the database they are using. Refer to your database
## documentation for what data types are available and supported.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
# ## This setting controls how unsigned integers are mapped. The default,
# ## "unsigned_suffix", appends the unsigned value to the integer type
# ## (producing e.g. "INT UNSIGNED"). The other option is "literal", which
# ## uses the unsigned value directly as the column type. This is useful
# ## for a database like ClickHouse where the type should be e.g. "UInt64".
# # conversion_style = "unsigned_suffix"
```
## Driver-specific information
### go-sql-driver/mysql
MySQL default quoting differs from standard ANSI/ISO SQL quoting. You must use
MySQL's `ANSI_QUOTES` mode with this plugin. You can enable this mode by using
the setting `init_sql = "SET sql_mode='ANSI_QUOTES';"` or through a command-line
option when running MySQL. See MySQL's docs for [details on
ANSI_QUOTES][mysql-quotes] and [how to set the SQL mode][mysql-mode].
You can use a DSN of the format "username:password@tcp(host:port)/dbname". See
the [driver docs][mysql-driver] for details.
[mysql-quotes]: https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_ansi_quotes
[mysql-mode]: https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sql-mode-setting
[mysql-driver]: https://github.com/go-sql-driver/mysql
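A minimal sketch of a MySQL configuration (host, port, and credentials are
placeholders):

```toml
[[outputs.sql]]
  driver = "mysql"
  data_source_name = "username:password@tcp(localhost:3306)/dbname"
  ## Required so the plugin's double-quoted identifiers work on MySQL
  init_sql = "SET sql_mode='ANSI_QUOTES';"
```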
### jackc/pgx
You can use a DSN of the format
"postgres://username:password@host:port/dbname". See the [driver
docs](https://github.com/jackc/pgx) for more details.
### modernc.org/sqlite
It is not supported on all platforms; the build constraints in the source
exclude mips, mipsle, mips64, mips64le, ppc64, riscv64, loong64, and the
windows/386 and windows/arm targets.
The DSN is a filename or url with scheme "file:". See the [driver
docs](https://modernc.org/sqlite) for details.
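A minimal sketch (the path is a placeholder):

```toml
[[outputs.sql]]
  driver = "sqlite"
  ## A plain filename or a "file:" URI
  data_source_name = "file:/var/lib/telegraf/metrics.db"
```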
### clickhouse
#### DSN
Note that even when the DSN is specified as `https://` the `secure=true`
parameter is still required.
The plugin now uses clickhouse-go v2. If you're still using a DSN compatible
with v1, the plugin will try to convert it to the new format, but because the
two formats are not fully equivalent, some parameters may no longer work.
Check your log file for warnings and refer to the
[v2 DSN documentation][v2-dsn-docs] for available options.
[v2-dsn-docs]: https://github.com/ClickHouse/clickhouse-go/tree/v2.30.2?tab=readme-ov-file#dsn
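A minimal sketch using the v2 DSN format (host, credentials, and database name
are placeholders):

```toml
[[outputs.sql]]
  driver = "clickhouse"
  data_source_name = "clickhouse://username:password@localhost:9000/dbname"
```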
#### Metric type to SQL type conversion
The following configuration makes the mapping compatible with ClickHouse:
```toml
[outputs.sql.convert]
conversion_style = "literal"
integer = "Int64"
text = "String"
timestamp = "DateTime"
defaultvalue = "String"
unsigned = "UInt64"
bool = "UInt8"
real = "Float64"
```
See [ClickHouse data
types](https://clickhouse.com/docs/en/sql-reference/data-types/) for more info.
### microsoft/go-mssqldb
Telegraf doesn't have unit tests for go-mssqldb, so it should be treated as
experimental.
### snowflakedb/gosnowflake
Telegraf doesn't have unit tests for gosnowflake, so it should be treated as
experimental.

@@ -0,0 +1,74 @@
# Save metrics to an SQL Database
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
##  sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = ""
## Data source name
## The format of the data source name is different for each database driver.
## See the plugin readme for details.
data_source_name = ""
## Timestamp column name, set to empty to ignore the timestamp
# timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
## {TAG_COLUMN_NAMES} - tag column definitions (list of quoted identifiers)
##  {TIMESTAMP_COLUMN_NAME} - the name of the timestamp column, as configured above
# table_template = "CREATE TABLE {TABLE}({COLUMNS})"
## NOTE: For the clickhouse driver the default is:
# table_template = "CREATE TABLE {TABLE}({COLUMNS}) ORDER BY ({TAG_COLUMN_NAMES}, {TIMESTAMP_COLUMN_NAME})"
## Table existence check template
## Available template variables:
##  {TABLE} - table name as a quoted identifier
# table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL
# init_sql = ""
## Maximum amount of time a connection may be idle. "0s" means connections are
## never closed due to idle time.
# connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections
## are never closed due to age.
# connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. <= 0 means no
## idle connections are retained.
# connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
# connection_max_open = 0
## NOTE: Due to the way TOML is parsed, tables must be at the END of the
## plugin definition, otherwise additional config options are read as part of
## the table
## Metric type to SQL type conversion
## The values on the left are the data types Telegraf has and the values on
## the right are the data types Telegraf will use when sending to a database.
##
## The database values used must be data types the destination database
## understands. It is up to the user to ensure that the selected data type is
## available in the database they are using. Refer to your database
## documentation for what data types are available and supported.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
# ## This setting controls how unsigned integers are mapped. The default,
# ## "unsigned_suffix", appends the unsigned value to the integer type
# ## (producing e.g. "INT UNSIGNED"). The other option is "literal", which
# ## uses the unsigned value directly as the column type. This is useful
# ## for a database like ClickHouse where the type should be e.g. "UInt64".
# # conversion_style = "unsigned_suffix"

plugins/outputs/sql/sql.go
@@ -0,0 +1,373 @@
//go:generate ../../../tools/readme_config_includer/generator
package sql
import (
gosql "database/sql"
_ "embed"
"fmt"
"net/url"
"strconv"
"strings"
"time"
_ "github.com/ClickHouse/clickhouse-go/v2" // clickhouse
_ "github.com/go-sql-driver/mysql" // mysql
_ "github.com/jackc/pgx/v4/stdlib" // pgx (postgres)
_ "github.com/microsoft/go-mssqldb" // mssql (sql server)
_ "github.com/microsoft/go-mssqldb/integratedauth/krb5" // integrated auth for mssql
_ "github.com/snowflakedb/gosnowflake" // snowflake
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/plugins/outputs"
)
//go:embed sample.conf
var sampleConfig string
var defaultConvert = ConvertStruct{
Integer: "INT",
Real: "DOUBLE",
Text: "TEXT",
Timestamp: "TIMESTAMP",
Defaultvalue: "TEXT",
Unsigned: "UNSIGNED",
Bool: "BOOL",
ConversionStyle: "unsigned_suffix",
}
type ConvertStruct struct {
Integer string `toml:"integer"`
Real string `toml:"real"`
Text string `toml:"text"`
Timestamp string `toml:"timestamp"`
Defaultvalue string `toml:"defaultvalue"`
Unsigned string `toml:"unsigned"`
Bool string `toml:"bool"`
ConversionStyle string `toml:"conversion_style"`
}
type SQL struct {
Driver string `toml:"driver"`
DataSourceName string `toml:"data_source_name"`
TimestampColumn string `toml:"timestamp_column"`
TableTemplate string `toml:"table_template"`
TableExistsTemplate string `toml:"table_exists_template"`
InitSQL string `toml:"init_sql"`
Convert ConvertStruct `toml:"convert"`
ConnectionMaxIdleTime config.Duration `toml:"connection_max_idle_time"`
ConnectionMaxLifetime config.Duration `toml:"connection_max_lifetime"`
ConnectionMaxIdle int `toml:"connection_max_idle"`
ConnectionMaxOpen int `toml:"connection_max_open"`
Log telegraf.Logger `toml:"-"`
db *gosql.DB
tables map[string]bool
}
func (*SQL) SampleConfig() string {
return sampleConfig
}
func (p *SQL) Init() error {
// Set defaults
if p.TableExistsTemplate == "" {
p.TableExistsTemplate = "SELECT 1 FROM {TABLE} LIMIT 1"
}
if p.TableTemplate == "" {
if p.Driver == "clickhouse" {
p.TableTemplate = "CREATE TABLE {TABLE}({COLUMNS}) ORDER BY ({TAG_COLUMN_NAMES}, {TIMESTAMP_COLUMN_NAME})"
} else {
p.TableTemplate = "CREATE TABLE {TABLE}({COLUMNS})"
}
}
// Check for a valid driver
switch p.Driver {
case "clickhouse":
// Convert v1-style Clickhouse DSN to v2-style
p.convertClickHouseDsn()
case "mssql", "mysql", "pgx", "snowflake", "sqlite":
// Do nothing, those are valid
default:
return fmt.Errorf("unknown driver %q", p.Driver)
}
return nil
}
func (p *SQL) Connect() error {
db, err := gosql.Open(p.Driver, p.DataSourceName)
if err != nil {
return fmt.Errorf("creating database client failed: %w", err)
}
if err := db.Ping(); err != nil {
return fmt.Errorf("pinging database failed: %w", err)
}
db.SetConnMaxIdleTime(time.Duration(p.ConnectionMaxIdleTime))
db.SetConnMaxLifetime(time.Duration(p.ConnectionMaxLifetime))
db.SetMaxIdleConns(p.ConnectionMaxIdle)
db.SetMaxOpenConns(p.ConnectionMaxOpen)
if p.InitSQL != "" {
if _, err = db.Exec(p.InitSQL); err != nil {
return fmt.Errorf("initializing database failed: %w", err)
}
}
p.db = db
p.tables = make(map[string]bool)
return nil
}
func (p *SQL) Close() error {
return p.db.Close()
}
// Quote an identifier (table or column name)
func quoteIdent(name string) string {
return `"` + strings.ReplaceAll(sanitizeQuoted(name), `"`, `""`) + `"`
}
// Quote a string literal
func quoteStr(name string) string {
return "'" + strings.ReplaceAll(name, "'", "''") + "'"
}
func sanitizeQuoted(in string) string {
// https://dev.mysql.com/doc/refman/8.0/en/identifiers.html
// https://www.postgresql.org/docs/13/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
// Allow any character from U+0001 through U+FFFF; NUL and runes outside
// the Basic Multilingual Plane are replaced with '_'
return strings.Map(func(r rune) rune {
switch {
case r >= '\u0001' && r <= '\uFFFF':
return r
default:
return '_'
}
}, in)
}
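// deriveDatatype maps a metric field's Go type to the SQL column type
// configured in the convert settings.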
func (p *SQL) deriveDatatype(value interface{}) string {
var datatype string
switch value.(type) {
case int64:
datatype = p.Convert.Integer
case uint64:
if p.Convert.ConversionStyle == "unsigned_suffix" {
datatype = fmt.Sprintf("%s %s", p.Convert.Integer, p.Convert.Unsigned)
} else if p.Convert.ConversionStyle == "literal" {
datatype = p.Convert.Unsigned
} else {
p.Log.Errorf("unknown conversion style: %s", p.Convert.ConversionStyle)
}
case float64:
datatype = p.Convert.Real
case string:
datatype = p.Convert.Text
case bool:
datatype = p.Convert.Bool
default:
datatype = p.Convert.Defaultvalue
p.Log.Errorf("Unknown datatype: '%T' %v", value, value)
}
return datatype
}
func (p *SQL) generateCreateTable(metric telegraf.Metric) string {
columns := make([]string, 0, len(metric.TagList())+len(metric.FieldList())+1)
tagColumnNames := make([]string, 0, len(metric.TagList()))
if p.TimestampColumn != "" {
columns = append(columns, fmt.Sprintf("%s %s", quoteIdent(p.TimestampColumn), p.Convert.Timestamp))
}
for _, tag := range metric.TagList() {
columns = append(columns, fmt.Sprintf("%s %s", quoteIdent(tag.Key), p.Convert.Text))
tagColumnNames = append(tagColumnNames, quoteIdent(tag.Key))
}
var datatype string
for _, field := range metric.FieldList() {
datatype = p.deriveDatatype(field.Value)
columns = append(columns, fmt.Sprintf("%s %s", quoteIdent(field.Key), datatype))
}
query := p.TableTemplate
query = strings.ReplaceAll(query, "{TABLE}", quoteIdent(metric.Name()))
query = strings.ReplaceAll(query, "{TABLELITERAL}", quoteStr(metric.Name()))
query = strings.ReplaceAll(query, "{COLUMNS}", strings.Join(columns, ","))
query = strings.ReplaceAll(query, "{TAG_COLUMN_NAMES}", strings.Join(tagColumnNames, ","))
query = strings.ReplaceAll(query, "{TIMESTAMP_COLUMN_NAME}", quoteIdent(p.TimestampColumn))
return query
}
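// generateInsert builds a parameterized INSERT statement for the given table
// and columns, using "$1, $2, ..." placeholders for pgx and "?" for all
// other drivers.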
func (p *SQL) generateInsert(tablename string, columns []string) string {
placeholders := make([]string, 0, len(columns))
quotedColumns := make([]string, 0, len(columns))
for _, column := range columns {
quotedColumns = append(quotedColumns, quoteIdent(column))
}
if p.Driver == "pgx" {
// Postgres uses $1 $2 $3 as placeholders
for i := 0; i < len(columns); i++ {
placeholders = append(placeholders, fmt.Sprintf("$%d", i+1))
}
} else {
// Everything else uses ? ? ? as placeholders
for i := 0; i < len(columns); i++ {
placeholders = append(placeholders, "?")
}
}
return fmt.Sprintf("INSERT INTO %s (%s) VALUES(%s)",
quoteIdent(tablename),
strings.Join(quotedColumns, ","),
strings.Join(placeholders, ","))
}
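// tableExists runs the (configurable) existence-check query; any error is
// treated as "table does not exist".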
func (p *SQL) tableExists(tableName string) bool {
stmt := strings.ReplaceAll(p.TableExistsTemplate, "{TABLE}", quoteIdent(tableName))
_, err := p.db.Exec(stmt)
return err == nil
}
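// Write inserts one row per metric, creating each table the first time it is
// seen. ClickHouse inserts go through a transaction and prepared statement;
// all other drivers use a single Exec.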
func (p *SQL) Write(metrics []telegraf.Metric) error {
var err error
for _, metric := range metrics {
tablename := metric.Name()
// create table if needed
if !p.tables[tablename] && !p.tableExists(tablename) {
createStmt := p.generateCreateTable(metric)
_, err := p.db.Exec(createStmt)
if err != nil {
return err
}
}
p.tables[tablename] = true
var columns []string
var values []interface{}
if p.TimestampColumn != "" {
columns = append(columns, p.TimestampColumn)
values = append(values, metric.Time())
}
for column, value := range metric.Tags() {
columns = append(columns, column)
values = append(values, value)
}
for column, value := range metric.Fields() {
columns = append(columns, column)
values = append(values, value)
}
sql := p.generateInsert(tablename, columns)
switch p.Driver {
case "clickhouse":
// ClickHouse needs to batch inserts with prepared statements
tx, err := p.db.Begin()
if err != nil {
return fmt.Errorf("begin failed: %w", err)
}
stmt, err := tx.Prepare(sql)
if err != nil {
return fmt.Errorf("prepare failed: %w", err)
}
defer stmt.Close() //nolint:revive,gocritic // done on purpose, closing will be executed properly
_, err = stmt.Exec(values...)
if err != nil {
return fmt.Errorf("execution failed: %w", err)
}
err = tx.Commit()
if err != nil {
return fmt.Errorf("commit failed: %w", err)
}
default:
_, err = p.db.Exec(sql, values...)
if err != nil {
return fmt.Errorf("execution failed: %w", err)
}
}
}
return nil
}
// Convert a DSN possibly using v1 parameters to clickhouse-go v2 format
func (p *SQL) convertClickHouseDsn() {
u, err := url.Parse(p.DataSourceName)
if err != nil {
return
}
query := u.Query()
// Log warnings for parameters no longer supported in clickhouse-go v2
unsupported := []string{"tls_config", "no_delay", "write_timeout", "block_size", "check_connection_liveness"}
for _, paramName := range unsupported {
if query.Has(paramName) {
p.Log.Warnf("DSN parameter '%s' is no longer supported by clickhouse-go v2", paramName)
query.Del(paramName)
}
}
if query.Get("connection_open_strategy") == "time_random" {
p.Log.Warn("DSN parameter 'connection_open_strategy' can no longer be 'time_random'")
}
// Convert the read_timeout parameter to a duration string
if d := query.Get("read_timeout"); d != "" {
if _, err := strconv.ParseFloat(d, 64); err == nil {
p.Log.Warn("Legacy DSN parameter 'read_timeout' interpreted as seconds")
query.Set("read_timeout", d+"s")
}
}
// Move database to the path
if d := query.Get("database"); d != "" {
p.Log.Warn("Legacy DSN parameter 'database' converted to new format")
query.Del("database")
u.Path = d
}
// Move alt_hosts to the host part
if altHosts := query.Get("alt_hosts"); altHosts != "" {
p.Log.Warn("Legacy DSN parameter 'alt_hosts' converted to new format")
query.Del("alt_hosts")
u.Host = u.Host + "," + altHosts
}
u.RawQuery = query.Encode()
p.DataSourceName = u.String()
}
func init() {
outputs.Add("sql", func() telegraf.Output {
return &SQL{
Convert: defaultConvert,
// Allow overriding the timestamp column to empty by the user
TimestampColumn: "timestamp",
// Defaults for the connection settings (ConnectionMaxIdleTime,
// ConnectionMaxLifetime, ConnectionMaxIdle, and ConnectionMaxOpen)
// mirror the golang defaults. As of go 1.18 all of them default to 0
// except max idle connections which is 2. See
// https://pkg.go.dev/database/sql#DB.SetMaxIdleConns
ConnectionMaxIdle: 2,
}
})
}

@@ -0,0 +1,523 @@
package sql
import (
"bytes"
"fmt"
"io"
"os"
"path/filepath"
"testing"
"time"
"github.com/docker/go-connections/nat"
"github.com/stretchr/testify/require"
"github.com/testcontainers/testcontainers-go/wait"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/metric"
"github.com/influxdata/telegraf/testutil"
)
func stableMetric(
name string,
tags []telegraf.Tag,
fields []telegraf.Field,
tm time.Time,
tp ...telegraf.ValueType,
) telegraf.Metric {
// We want to compare the output of this plugin with expected
// output. Maps don't preserve order so comparison fails. There's
// no metric constructor that takes a slice of tag and slice of
// field, just the one that takes maps.
//
// To preserve order, construct the metric without tags and fields
// and then add them using AddTag and AddField. Those are stable.
m := metric.New(name, map[string]string{}, map[string]interface{}{}, tm, tp...)
for _, tag := range tags {
m.AddTag(tag.Key, tag.Value)
}
for _, field := range fields {
m.AddField(field.Key, field.Value)
}
return m
}
var (
// 2021-05-17T22:04:45+00:00
// or 2021-05-17T16:04:45-06:00
ts = time.Unix(1621289085, 0).UTC()
testMetrics = []telegraf.Metric{
stableMetric(
"metric_one",
[]telegraf.Tag{
{
Key: "tag_one",
Value: "tag1",
},
{
Key: "tag_two",
Value: "tag2",
},
},
[]telegraf.Field{
{
Key: "int64_one",
Value: int64(1234),
},
{
Key: "int64_two",
Value: int64(2345),
},
{
Key: "bool_one",
Value: true,
},
{
Key: "bool_two",
Value: false,
},
{
Key: "uint64_one",
Value: uint64(1000000000),
},
{
Key: "float64_one",
Value: float64(3.1415),
},
},
ts,
),
stableMetric(
"metric_two",
[]telegraf.Tag{
{
Key: "tag_three",
Value: "tag3",
},
},
[]telegraf.Field{
{
Key: "string_one",
Value: "string1",
},
},
ts,
),
stableMetric( // test spaces in metric, tag, and field names
"metric three",
[]telegraf.Tag{
{
Key: "tag four",
Value: "tag4",
},
},
[]telegraf.Field{
{
Key: "string two",
Value: "string2",
},
},
ts,
),
}
)
func TestMysqlIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
initdb, err := filepath.Abs("testdata/mariadb/initdb/script.sql")
require.NoError(t, err)
// initdb/script.sql creates this database
const dbname = "foo"
// The mariadb image lets you set the root password through an env
// var. We'll use root to insert and query test data.
const username = "root"
password := testutil.GetRandomString(32)
outDir := t.TempDir()
servicePort := "3306"
container := testutil.Container{
Image: "mariadb",
Env: map[string]string{
"MARIADB_ROOT_PASSWORD": password,
},
Files: map[string]string{
"/docker-entrypoint-initdb.d/script.sql": initdb,
"/out": outDir,
},
ExposedPorts: []string{servicePort},
WaitingFor: wait.ForAll(
wait.ForListeningPort(nat.Port(servicePort)),
wait.ForLog("mariadbd: ready for connections.").WithOccurrence(2),
),
}
require.NoError(t, container.Start(), "failed to start container")
defer container.Terminate()
// use the plugin to write to the database
address := fmt.Sprintf("%v:%v@tcp(%v:%v)/%v",
username, password, container.Address, container.Ports[servicePort], dbname,
)
p := &SQL{
Driver: "mysql",
DataSourceName: address,
Convert: defaultConvert,
InitSQL: "SET sql_mode='ANSI_QUOTES';",
TimestampColumn: "timestamp",
ConnectionMaxIdle: 2,
Log: testutil.Logger{},
}
require.NoError(t, p.Init())
require.NoError(t, p.Connect())
require.NoError(t, p.Write(testMetrics))
files := []string{
"./testdata/mariadb/expected_metric_one.sql",
"./testdata/mariadb/expected_metric_two.sql",
"./testdata/mariadb/expected_metric_three.sql",
}
for _, fn := range files {
expected, err := os.ReadFile(fn)
require.NoError(t, err)
require.Eventually(t, func() bool {
rc, out, err := container.Exec([]string{
"bash",
"-c",
"mariadb-dump --user=" + username +
" --password=" + password +
" --compact" +
" --skip-opt " +
dbname,
})
require.NoError(t, err)
require.Equal(t, 0, rc)
b, err := io.ReadAll(out)
require.NoError(t, err)
return bytes.Contains(b, expected)
}, 10*time.Second, 500*time.Millisecond, fn)
}
}
func TestPostgresIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
initdb, err := filepath.Abs("testdata/postgres/initdb/init.sql")
require.NoError(t, err)
// initdb/init.sql creates this database
const dbname = "foo"
// default username for postgres is postgres
const username = "postgres"
password := testutil.GetRandomString(32)
outDir := t.TempDir()
servicePort := "5432"
container := testutil.Container{
Image: "postgres",
Env: map[string]string{
"POSTGRES_PASSWORD": password,
},
Files: map[string]string{
"/docker-entrypoint-initdb.d/script.sql": initdb,
"/out": outDir,
},
ExposedPorts: []string{servicePort},
WaitingFor: wait.ForAll(
wait.ForListeningPort(nat.Port(servicePort)),
wait.ForLog("database system is ready to accept connections").WithOccurrence(2),
),
}
require.NoError(t, container.Start(), "failed to start container")
defer container.Terminate()
// use the plugin to write to the database
// host, port, username, password, dbname
address := fmt.Sprintf("postgres://%v:%v@%v:%v/%v",
username, password, container.Address, container.Ports[servicePort], dbname,
)
p := &SQL{
Driver: "pgx",
DataSourceName: address,
Convert: defaultConvert,
TimestampColumn: "timestamp",
ConnectionMaxIdle: 2,
Log: testutil.Logger{},
}
p.Convert.Real = "double precision"
p.Convert.Unsigned = "bigint"
p.Convert.ConversionStyle = "literal"
require.NoError(t, p.Init())
require.NoError(t, p.Connect())
defer p.Close()
require.NoError(t, p.Write(testMetrics))
require.NoError(t, p.Close())
expected, err := os.ReadFile("./testdata/postgres/expected.sql")
require.NoError(t, err)
require.Eventually(t, func() bool {
rc, out, err := container.Exec([]string{
"bash",
"-c",
"pg_dump" +
" --username=" + username +
" --no-comments" +
" " + dbname +
// pg_dump's output has comments that include build info
// of postgres and pg_dump. The build info changes with
// each release. To prevent these changes from causing the
// test to fail, we strip out comments. Also strip out
// blank lines.
"|grep -E -v '(^--|^$|^SET )'",
})
require.NoError(t, err)
require.Equal(t, 0, rc)
b, err := io.ReadAll(out)
require.NoError(t, err)
return bytes.Contains(b, expected)
}, 5*time.Second, 500*time.Millisecond)
}
func TestClickHouseIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
logConfig, err := filepath.Abs("testdata/clickhouse/enable_stdout_log.xml")
require.NoError(t, err)
initdb, err := filepath.Abs("testdata/clickhouse/initdb/init.sql")
require.NoError(t, err)
// initdb/init.sql creates this database
const dbname = "foo"
// username for connecting to clickhouse
const username = "clickhouse"
password := testutil.GetRandomString(32)
outDir := t.TempDir()
servicePort := "9000"
container := testutil.Container{
Image: "clickhouse",
ExposedPorts: []string{servicePort, "8123"},
Env: map[string]string{
"CLICKHOUSE_USER": "clickhouse",
"CLICKHOUSE_PASSWORD": password,
},
Files: map[string]string{
"/docker-entrypoint-initdb.d/script.sql": initdb,
"/etc/clickhouse-server/config.d/enable_stdout_log.xml": logConfig,
"/out": outDir,
},
WaitingFor: wait.ForAll(
wait.NewHTTPStrategy("/").WithPort(nat.Port("8123")),
wait.ForListeningPort(nat.Port(servicePort)),
wait.ForLog("Ready for connections"),
),
}
require.NoError(t, container.Start(), "failed to start container")
defer container.Terminate()
// use the plugin to write to the database
// host, port, username, password, dbname
address := fmt.Sprintf("tcp://%s:%s/%s?username=%s&password=%s",
container.Address, container.Ports[servicePort], dbname, username, password)
p := &SQL{
Driver: "clickhouse",
DataSourceName: address,
Convert: defaultConvert,
TimestampColumn: "timestamp",
ConnectionMaxIdle: 2,
Log: testutil.Logger{},
}
p.Convert.Integer = "Int64"
p.Convert.Text = "String"
p.Convert.Timestamp = "DateTime"
p.Convert.Defaultvalue = "String"
p.Convert.Unsigned = "UInt64"
p.Convert.Bool = "UInt8"
p.Convert.ConversionStyle = "literal"
require.NoError(t, p.Init())
require.NoError(t, p.Connect())
require.NoError(t, p.Write(testMetrics))
cases := []struct {
table string
expected string
}{
{"metric_one", "`float64_one` Float64"},
{"metric_two", "`string_one` String"},
{"metric three", "`string two` String"},
}
for _, tc := range cases {
require.Eventually(t, func() bool {
var out io.Reader
_, out, err = container.Exec([]string{
"bash",
"-c",
"clickhouse-client" +
" --user=" + username +
" --database=" + dbname +
" --format=TabSeparatedRaw" +
" --multiquery" +
` --query="SELECT * FROM \"` + tc.table + `\"; SHOW CREATE TABLE \"` + tc.table + `\""`,
})
require.NoError(t, err)
b, err := io.ReadAll(out)
require.NoError(t, err)
return bytes.Contains(b, []byte(tc.expected))
}, 5*time.Second, 500*time.Millisecond)
}
}
func TestClickHouseDsnConvert(t *testing.T) {
tests := []struct {
input string
expected string
}{
// Contains no incompatible settings - no change
{
"tcp://host1:1234,host2:1234/database?password=p&username=u",
"tcp://host1:1234,host2:1234/database?password=p&username=u",
},
// connection_open_strategy + read_timeout with values that are already v2 compatible
{
"tcp://host1:1234,host2:1234/database?connection_open_strategy=in_order&read_timeout=2.5s&username=u",
"tcp://host1:1234,host2:1234/database?connection_open_strategy=in_order&read_timeout=2.5s&username=u",
},
// Preserve invalid URLs
{
"://this will not parse",
"://this will not parse",
},
// Removing incompatible parameters
{
"tcp://host:1234/database?no_delay=true&username=u",
"tcp://host:1234/database?username=u",
},
// read_timeout + alt_hosts
{
"tcp://host1:1234/database?read_timeout=2.5&alt_hosts=host2:2345&username=u",
"tcp://host1:1234,host2:2345/database?read_timeout=2.5s&username=u",
},
// database
{
"tcp://host1:1234?database=db&username=u",
"tcp://host1:1234/db?username=u",
},
}
for _, tt := range tests {
plugin := &SQL{
Driver: "clickhouse",
DataSourceName: tt.input,
Log: testutil.Logger{},
}
require.NoError(t, plugin.Init())
require.Equal(t, tt.expected, plugin.DataSourceName)
}
}
func TestMysqlEmptyTimestampColumnIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
initdb, err := filepath.Abs("testdata/mariadb_no_timestamp/initdb/script.sql")
require.NoError(t, err)
// initdb/script.sql creates this database
const dbname = "foo"
// The mariadb image lets you set the root password through an env
// var. We'll use root to insert and query test data.
const username = "root"
password := testutil.GetRandomString(32)
outDir := t.TempDir()
servicePort := "3306"
container := testutil.Container{
Image: "mariadb",
Env: map[string]string{
"MARIADB_ROOT_PASSWORD": password,
},
Files: map[string]string{
"/docker-entrypoint-initdb.d/script.sql": initdb,
"/out": outDir,
},
ExposedPorts: []string{servicePort},
WaitingFor: wait.ForAll(
wait.ForListeningPort(nat.Port(servicePort)),
wait.ForLog("mariadbd: ready for connections.").WithOccurrence(2),
),
}
require.NoError(t, container.Start(), "failed to start container")
defer container.Terminate()
// use the plugin to write to the database
address := fmt.Sprintf("%v:%v@tcp(%v:%v)/%v",
username, password, container.Address, container.Ports[servicePort], dbname,
)
p := &SQL{
Driver: "mysql",
DataSourceName: address,
Convert: defaultConvert,
InitSQL: "SET sql_mode='ANSI_QUOTES';",
ConnectionMaxIdle: 2,
Log: testutil.Logger{},
}
require.NoError(t, p.Init())
require.NoError(t, p.Connect())
require.NoError(t, p.Write(testMetrics))
files := []string{
"./testdata/mariadb_no_timestamp/expected_metric_one.sql",
"./testdata/mariadb_no_timestamp/expected_metric_two.sql",
"./testdata/mariadb_no_timestamp/expected_metric_three.sql",
}
for _, fn := range files {
expected, err := os.ReadFile(fn)
require.NoError(t, err)
require.Eventually(t, func() bool {
rc, out, err := container.Exec([]string{
"bash",
"-c",
"mariadb-dump --user=" + username +
" --password=" + password +
" --compact" +
" --skip-opt " +
dbname,
})
require.NoError(t, err)
require.Equal(t, 0, rc)
b, err := io.ReadAll(out)
require.NoError(t, err)
return bytes.Contains(b, expected)
}, 10*time.Second, 500*time.Millisecond, fn)
}
}

@@ -0,0 +1,10 @@
//go:build !mips && !mipsle && !mips64 && !ppc64 && !riscv64 && !loong64 && !mips64le && !(windows && (386 || arm))
package sql
// The modernc.org sqlite driver isn't supported on all
// platforms. Register it with build constraints to prevent build
// failures on unsupported platforms.
import (
_ "modernc.org/sqlite" // Register sqlite sql driver
)

@@ -0,0 +1,135 @@
//go:build !mips && !mipsle && !mips64 && !ppc64 && !riscv64 && !loong64 && !mips64le && !(windows && (386 || arm))
package sql
import (
gosql "database/sql"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf/testutil"
)
func TestSqlite(t *testing.T) {
dbfile := filepath.Join(t.TempDir(), "db")
defer os.Remove(dbfile)
// Use the plugin to write to the database
// address := fmt.Sprintf("file:%v", dbfile)
address := dbfile // accepts a path or a file: URI
p := &SQL{
Driver: "sqlite",
DataSourceName: address,
Convert: defaultConvert,
TimestampColumn: "timestamp",
ConnectionMaxIdle: 2,
Log: testutil.Logger{},
}
require.NoError(t, p.Init())
require.NoError(t, p.Connect())
defer p.Close()
require.NoError(t, p.Write(testMetrics))
// read directly from the database
db, err := gosql.Open("sqlite", address)
require.NoError(t, err)
defer db.Close()
var countMetricOne int
require.NoError(t, db.QueryRow("select count(*) from metric_one").Scan(&countMetricOne))
require.Equal(t, 1, countMetricOne)
var countMetricTwo int
require.NoError(t, db.QueryRow("select count(*) from metric_two").Scan(&countMetricTwo))
require.Equal(t, 1, countMetricTwo)
var rows *gosql.Rows
// Check that tables were created as expected
rows, err = db.Query("select sql from sqlite_master")
require.NoError(t, err)
defer rows.Close()
var sql string
require.True(t, rows.Next())
require.NoError(t, rows.Scan(&sql))
require.Equal(t,
`CREATE TABLE "metric_one"("timestamp" TIMESTAMP,"tag_one" TEXT,"tag_two" TEXT,"int64_one" INT,`+
`"int64_two" INT,"bool_one" BOOL,"bool_two" BOOL,"uint64_one" INT UNSIGNED,"float64_one" DOUBLE)`,
sql,
)
require.True(t, rows.Next())
require.NoError(t, rows.Scan(&sql))
require.Equal(t,
`CREATE TABLE "metric_two"("timestamp" TIMESTAMP,"tag_three" TEXT,"string_one" TEXT)`,
sql,
)
require.True(t, rows.Next())
require.NoError(t, rows.Scan(&sql))
require.Equal(t,
`CREATE TABLE "metric three"("timestamp" TIMESTAMP,"tag four" TEXT,"string two" TEXT)`,
sql,
)
require.False(t, rows.Next())
// sqlite stores dates as strings. They may be in the local
// timezone. The test needs to parse them back into a time.Time to
// check them.
// timeLayout := "2006-01-02 15:04:05 -0700 MST"
timeLayout := "2006-01-02T15:04:05Z"
var actualTime time.Time
// Check contents of tables
rows2, err := db.Query("select timestamp, tag_one, tag_two, int64_one, int64_two from metric_one")
require.NoError(t, err)
defer rows2.Close()
require.True(t, rows2.Next())
var (
a string
b, c string
d, e int64
)
require.NoError(t, rows2.Scan(&a, &b, &c, &d, &e))
actualTime, err = time.Parse(timeLayout, a)
require.NoError(t, err)
require.Equal(t, ts, actualTime.UTC())
require.Equal(t, "tag1", b)
require.Equal(t, "tag2", c)
require.Equal(t, int64(1234), d)
require.Equal(t, int64(2345), e)
require.False(t, rows2.Next())
rows3, err := db.Query("select timestamp, tag_three, string_one from metric_two")
require.NoError(t, err)
defer rows3.Close()
require.True(t, rows3.Next())
var (
f, g, h string
)
require.NoError(t, rows3.Scan(&f, &g, &h))
actualTime, err = time.Parse(timeLayout, f)
require.NoError(t, err)
require.Equal(t, ts, actualTime.UTC())
require.Equal(t, "tag3", g)
require.Equal(t, "string1", h)
require.False(t, rows3.Next())
rows4, err := db.Query(`select timestamp, "tag four", "string two" from "metric three"`)
require.NoError(t, err)
defer rows4.Close()
require.True(t, rows4.Next())
var (
i, j, k string
)
require.NoError(t, rows4.Scan(&i, &j, &k))
actualTime, err = time.Parse(timeLayout, i)
require.NoError(t, err)
require.Equal(t, ts, actualTime.UTC())
require.Equal(t, "tag4", j)
require.Equal(t, "string2", k)
require.False(t, rows4.Next())
}

@@ -0,0 +1,5 @@
<clickhouse>
<logger>
<console>1</console>
</logger>
</clickhouse>

@@ -0,0 +1,36 @@
2021-05-17 22:04:45 tag1 tag2 1234 2345 1 0 1000000000 3.1415
CREATE TABLE foo.metric_one
(
`timestamp` DateTime,
`tag_one` String,
`tag_two` String,
`int64_one` Int64,
`int64_two` Int64,
`bool_one` UInt8,
`bool_two` UInt8,
`uint64_one` UInt64,
`float64_one` Float64
)
ENGINE = MergeTree
ORDER BY timestamp
SETTINGS index_granularity = 8192
2021-05-17 22:04:45 tag3 string1
CREATE TABLE foo.metric_two
(
`timestamp` DateTime,
`tag_three` String,
`string_one` String
)
ENGINE = MergeTree
ORDER BY timestamp
SETTINGS index_granularity = 8192
2021-05-17 22:04:45 tag4 string2
CREATE TABLE foo.`metric three`
(
`timestamp` DateTime,
`tag four` String,
`string two` String
)
ENGINE = MergeTree
ORDER BY timestamp
SETTINGS index_granularity = 8192

@@ -0,0 +1 @@
CREATE DATABASE foo;

@@ -0,0 +1,14 @@
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `metric_one` (
`timestamp` timestamp NULL DEFAULT NULL,
`tag_one` text DEFAULT NULL,
`tag_two` text DEFAULT NULL,
`int64_one` int(11) DEFAULT NULL,
`int64_two` int(11) DEFAULT NULL,
`bool_one` tinyint(1) DEFAULT NULL,
`bool_two` tinyint(1) DEFAULT NULL,
`uint64_one` int(10) unsigned DEFAULT NULL,
`float64_one` double DEFAULT NULL
);
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `metric_one` VALUES ('2021-05-17 22:04:45','tag1','tag2',1234,2345,1,0,1000000000,3.1415);

@@ -0,0 +1,8 @@
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `metric three` (
`timestamp` timestamp NULL DEFAULT NULL,
`tag four` text DEFAULT NULL,
`string two` text DEFAULT NULL
);
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `metric three` VALUES ('2021-05-17 22:04:45','tag4','string2');

@@ -0,0 +1,8 @@
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `metric_two` (
`timestamp` timestamp NULL DEFAULT NULL,
`tag_three` text DEFAULT NULL,
`string_one` text DEFAULT NULL
);
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `metric_two` VALUES ('2021-05-17 22:04:45','tag3','string1');

@@ -0,0 +1,4 @@
create database foo;
use foo;
create table bar (baz int);
insert into bar (baz) values (1);

@@ -0,0 +1,13 @@
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `metric_one` (
`tag_one` text DEFAULT NULL,
`tag_two` text DEFAULT NULL,
`int64_one` int(11) DEFAULT NULL,
`int64_two` int(11) DEFAULT NULL,
`bool_one` tinyint(1) DEFAULT NULL,
`bool_two` tinyint(1) DEFAULT NULL,
`uint64_one` int(10) unsigned DEFAULT NULL,
`float64_one` double DEFAULT NULL
);
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `metric_one` VALUES ('tag1','tag2',1234,2345,1,0,1000000000,3.1415);

@@ -0,0 +1,7 @@
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `metric three` (
`tag four` text DEFAULT NULL,
`string two` text DEFAULT NULL
);
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `metric three` VALUES ('tag4','string2');

@@ -0,0 +1,7 @@
/*!40101 SET character_set_client = utf8mb4 */;
CREATE TABLE `metric_two` (
`tag_three` text DEFAULT NULL,
`string_one` text DEFAULT NULL
);
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `metric_two` VALUES ('tag3','string1');

@@ -0,0 +1,4 @@
create database foo;
use foo;
create table bar (baz int);
insert into bar (baz) values (1);

@@ -0,0 +1,34 @@
SELECT pg_catalog.set_config('search_path', '', false);
CREATE TABLE public."metric three" (
"timestamp" timestamp without time zone,
"tag four" text,
"string two" text
);
ALTER TABLE public."metric three" OWNER TO postgres;
CREATE TABLE public.metric_one (
"timestamp" timestamp without time zone,
tag_one text,
tag_two text,
int64_one integer,
int64_two integer,
bool_one boolean,
bool_two boolean,
uint64_one bigint,
float64_one double precision
);
ALTER TABLE public.metric_one OWNER TO postgres;
CREATE TABLE public.metric_two (
"timestamp" timestamp without time zone,
tag_three text,
string_one text
);
ALTER TABLE public.metric_two OWNER TO postgres;
COPY public."metric three" ("timestamp", "tag four", "string two") FROM stdin;
2021-05-17 22:04:45 tag4 string2
\.
COPY public.metric_one ("timestamp", tag_one, tag_two, int64_one, int64_two, bool_one, bool_two, uint64_one, float64_one) FROM stdin;
2021-05-17 22:04:45 tag1 tag2 1234 2345 t f 1000000000 3.1415
\.
COPY public.metric_two ("timestamp", tag_three, string_one) FROM stdin;
2021-05-17 22:04:45 tag3 string1
\.

@@ -0,0 +1,2 @@
create database foo;