
Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
4963 changed files with 677545 additions and 0 deletions


@ -0,0 +1,317 @@
# PostgreSQL Extensible Input Plugin
This PostgreSQL plugin provides metrics for your PostgreSQL database. It has been
designed to parse SQL queries declared in the plugin section of your `telegraf.conf`.
In the example below, two queries are specified, each with the following parameters:
* The SQL query itself
* The minimum PostgreSQL version supported (the numeric value visible in `pg_settings`)
* A boolean defining whether the query has to be run against specific databases (listed in the `databases` variable of the plugin section)
* The name of the measurement
* A list of the columns to be defined as tags
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or create aliases and configure ordering, etc.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Secret-store support
This plugin supports secrets from secret-stores for the `address` option.
See the [secret-store documentation][SECRETSTORE] for more details on how
to use them.
[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets
## Configuration
```toml @sample.conf
# Read metrics from one or many postgresql servers
[[inputs.postgresql_extensible]]
# specify address via a url matching:
# postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=...&statement_timeout=...
# or a simple string:
# host=localhost port=5432 user=pqgotest password=... sslmode=... dbname=app_production
#
# All connection parameters are optional.
# Without the dbname parameter, the driver will default to a database
# with the same name as the user. This dbname is just for instantiating a
# connection with the server and doesn't restrict the databases we are trying
# to grab metrics for.
#
address = "host=localhost user=postgres sslmode=disable"
## Whether to use prepared statements when connecting to the database.
## This should be set to false when connecting through a PgBouncer instance
## with pool_mode set to transaction.
prepared_statements = true
# Define the toml config where the sql queries are stored.
# The script option can be used to specify the path to a .sql file.
# If the script and sqlquery options are specified at the same time, sqlquery is used.
#
# the measurement field defines measurement name for metrics produced
# by the query. Default is "postgresql".
#
# the tagvalue field is used to define custom tags (separated by commas).
# the query is expected to return columns which match the names of the
# defined tags. The values in these columns must be of a string-type,
# a number-type or a blob-type.
#
# The timestamp field is used to override the data point's timestamp value. By
# default, all rows are inserted with the current time. By setting a timestamp
# column, the row will be inserted with that column's value.
#
# The min_version field specifies the minimum database version this query
# will run on.
#
# The max_version field, when set, specifies the database version starting
# from which this query will NOT run.
#
# The database version in `min_version` and `max_version` is represented as
# a single integer without the last component, for example:
# 9.6.2 -> 906
# 15.2 -> 1500
#
# Structure :
# [[inputs.postgresql_extensible.query]]
# measurement string
# sqlquery string
# min_version int
# max_version int
# withdbname boolean
# tagvalue string (comma separated)
# timestamp string
[[inputs.postgresql_extensible.query]]
measurement="pg_stat_database"
sqlquery="SELECT * FROM pg_stat_database WHERE datname"
min_version=901
tagvalue=""
[[inputs.postgresql_extensible.query]]
script="your_sql-filepath.sql"
min_version=901
max_version=1300
tagvalue=""
```
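The version matching described above can be sketched in Go. This is a simplified, illustrative reimplementation (the name `versionMatches` is not part of the plugin); it assumes, as documented, that a query runs when the server version is at least `min_version` and strictly below `max_version`, with a `max_version` of 0 meaning no upper bound:

```go
package main

import "fmt"

// versionMatches mirrors the documented min_version/max_version semantics:
// a query runs when min_version <= dbVersion and, when max_version is set,
// dbVersion is strictly below it.
func versionMatches(minVersion, maxVersion, dbVersion int) bool {
	return minVersion <= dbVersion && (maxVersion == 0 || maxVersion > dbVersion)
}

func main() {
	// The version integer drops the last component of server_version_num:
	// 9.6.2 -> server_version_num 90602  -> 906
	// 15.2  -> server_version_num 150002 -> 1500
	fmt.Println(90602 / 100)  // 906
	fmt.Println(150002 / 100) // 1500

	// A query with min_version=901, max_version=1300 runs on 9.6 but not on 15.2.
	fmt.Println(versionMatches(901, 1300, 906))  // true
	fmt.Println(versionMatches(901, 1300, 1500)) // false
}
```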
The system can easily be extended using custom metrics collection tools or
PostgreSQL extensions ([pg_stat_statements][1], [pg_proctab][2] or
[powa][3]).
[1]: http://www.postgresql.org/docs/current/static/pgstatstatements.html
[2]: https://github.com/markwkm/pg_proctab
[3]: http://dalibo.github.io/powa/
## Sample Queries
* telegraf.conf postgresql_extensible queries (assuming that you have correctly
  configured your connection)
```toml
[[inputs.postgresql_extensible.query]]
sqlquery="SELECT * FROM pg_stat_database"
version=901
withdbname=false
tagvalue=""
[[inputs.postgresql_extensible.query]]
sqlquery="SELECT * FROM pg_stat_bgwriter"
version=901
withdbname=false
tagvalue=""
[[inputs.postgresql_extensible.query]]
sqlquery="select * from sessions"
version=901
withdbname=false
tagvalue="db,username,state"
[[inputs.postgresql_extensible.query]]
sqlquery="select setting as max_connections from pg_settings where \
name='max_connections'"
version=801
withdbname=false
tagvalue=""
[[inputs.postgresql_extensible.query]]
sqlquery="select * from pg_stat_kcache"
version=901
withdbname=false
tagvalue=""
[[inputs.postgresql_extensible.query]]
sqlquery="select setting as shared_buffers from pg_settings where \
name='shared_buffers'"
version=801
withdbname=false
tagvalue=""
[[inputs.postgresql_extensible.query]]
sqlquery="SELECT db, count( distinct blocking_pid ) AS num_blocking_sessions,\
count( distinct blocked_pid) AS num_blocked_sessions FROM \
public.blocking_procs group by db"
version=901
withdbname=false
tagvalue="db"
[[inputs.postgresql_extensible.query]]
sqlquery="""
SELECT type, (enabled || '') AS enabled, COUNT(*)
FROM application_users
GROUP BY type, enabled
"""
version=901
withdbname=false
tagvalue="type,enabled"
```
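The effect of `tagvalue` in the queries above can be sketched as follows. This is illustrative Go, not the plugin's actual code (`splitRow` is a hypothetical helper): columns whose names appear in the comma-separated `tagvalue` list become tags, and all remaining columns become fields.

```go
package main

import (
	"fmt"
	"strings"
)

// splitRow separates one result row into tags and fields: columns listed in
// tagvalue become tags (stringified), everything else becomes a field.
func splitRow(tagvalue string, row map[string]interface{}) (map[string]string, map[string]interface{}) {
	tagCols := make(map[string]bool)
	for _, t := range strings.Split(tagvalue, ",") {
		if t != "" {
			tagCols[t] = true
		}
	}
	tags := make(map[string]string)
	fields := make(map[string]interface{})
	for col, val := range row {
		if tagCols[col] {
			tags[col] = fmt.Sprint(val)
		} else {
			fields[col] = val
		}
	}
	return tags, fields
}

func main() {
	tags, fields := splitRow("db,username,state", map[string]interface{}{
		"db": "app", "username": "alice", "state": "running", "session_read_io": int64(42),
	})
	fmt.Println(tags["db"], tags["username"], tags["state"]) // app alice running
	fmt.Println(fields["session_read_io"])                   // 42
}
```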
## PostgreSQL Side
postgresql.conf:
```sql
shared_preload_libraries = 'pg_stat_statements,pg_stat_kcache'
```
Please follow the requirements to set up those extensions.
In the database (which can be a dedicated monitoring database):
```sql
create extension pg_stat_statements;
create extension pg_stat_kcache;
create extension pg_proctab;
```
(assuming that the extensions are installed at the OS layer)
* pg_stat_kcache is available on the postgresql.org yum repo
* pg_proctab is available at : <https://github.com/markwkm/pg_proctab>
## Views
* Blocking sessions
```sql
CREATE OR REPLACE VIEW public.blocking_procs AS
SELECT a.datname AS db,
kl.pid AS blocking_pid,
ka.usename AS blocking_user,
ka.query AS blocking_query,
bl.pid AS blocked_pid,
a.usename AS blocked_user,
a.query AS blocked_query,
to_char(age(now(), a.query_start), 'HH24h:MIm:SSs'::text) AS age
FROM pg_locks bl
JOIN pg_stat_activity a ON bl.pid = a.pid
JOIN pg_locks kl ON bl.locktype = kl.locktype AND NOT bl.database IS
DISTINCT FROM kl.database AND NOT bl.relation IS DISTINCT FROM kl.relation
AND NOT bl.page IS DISTINCT FROM kl.page AND NOT bl.tuple IS DISTINCT FROM
kl.tuple AND NOT bl.virtualxid IS DISTINCT FROM kl.virtualxid AND NOT
bl.transactionid IS DISTINCT FROM kl.transactionid AND NOT bl.classid IS
DISTINCT FROM kl.classid AND NOT bl.objid IS DISTINCT FROM kl.objid AND
NOT bl.objsubid IS DISTINCT FROM kl.objsubid AND bl.pid <> kl.pid
JOIN pg_stat_activity ka ON kl.pid = ka.pid
WHERE kl.granted AND NOT bl.granted
ORDER BY a.query_start;
```
* Sessions Statistics
```sql
CREATE OR REPLACE VIEW public.sessions AS
WITH proctab AS (
SELECT pg_proctab.pid,
CASE
WHEN pg_proctab.state::text = 'R'::bpchar::text
THEN 'running'::text
WHEN pg_proctab.state::text = 'D'::bpchar::text
THEN 'sleep-io'::text
WHEN pg_proctab.state::text = 'S'::bpchar::text
THEN 'sleep-waiting'::text
WHEN pg_proctab.state::text = 'Z'::bpchar::text
THEN 'zombie'::text
WHEN pg_proctab.state::text = 'T'::bpchar::text
THEN 'stopped'::text
ELSE NULL::text
END AS proc_state,
pg_proctab.ppid,
pg_proctab.utime,
pg_proctab.stime,
pg_proctab.vsize,
pg_proctab.rss,
pg_proctab.processor,
pg_proctab.rchar,
pg_proctab.wchar,
pg_proctab.syscr,
pg_proctab.syscw,
pg_proctab.reads,
pg_proctab.writes,
pg_proctab.cwrites
FROM pg_proctab() pg_proctab(pid, comm, fullcomm, state, ppid, pgrp,
session, tty_nr, tpgid, flags, minflt, cminflt, majflt, cmajflt,
utime, stime, cutime, cstime, priority, nice, num_threads,
itrealvalue, starttime, vsize, rss, exit_signal, processor,
rt_priority, policy, delayacct_blkio_ticks, uid, username, rchar,
wchar, syscr, syscw, reads, writes, cwrites)
), stat_activity AS (
SELECT pg_stat_activity.datname,
pg_stat_activity.pid,
pg_stat_activity.usename,
CASE
WHEN pg_stat_activity.query IS NULL THEN 'no query'::text
WHEN pg_stat_activity.query IS NOT NULL AND
pg_stat_activity.state = 'idle'::text THEN 'no query'::text
ELSE regexp_replace(pg_stat_activity.query, '[\n\r]+'::text,
' '::text, 'g'::text)
END AS query
FROM pg_stat_activity
)
SELECT stat.datname::name AS db,
stat.usename::name AS username,
stat.pid,
proc.proc_state::text AS state,
('"'::text || stat.query) || '"'::text AS query,
(proc.utime/1000)::bigint AS session_usertime,
(proc.stime/1000)::bigint AS session_systemtime,
proc.vsize AS session_virtual_memory_size,
proc.rss AS session_resident_memory_size,
proc.processor AS session_processor_number,
proc.rchar AS session_bytes_read,
proc.rchar-proc.reads AS session_logical_bytes_read,
proc.wchar AS session_bytes_written,
proc.wchar-proc.writes AS session_logical_bytes_writes,
proc.syscr AS session_read_io,
proc.syscw AS session_write_io,
proc.reads AS session_physical_reads,
proc.writes AS session_physical_writes,
proc.cwrites AS session_cancel_writes
FROM proctab proc,
stat_activity stat
WHERE proc.pid = stat.pid;
```
## Example Output
The example output below was generated by running the query
```sql
select count(*)*100 / (select cast(nullif(setting, '') AS integer) from pg_settings where name='max_connections') as percentage_of_used_cons from pg_stat_activity
```
which generates the following output:
```text
postgresql,db=postgres,server=dbname\=postgres\ host\=localhost\ port\=5432\ statement_timeout\=10000\ user\=postgres percentage_of_used_cons=6i 1672400531000000000
```
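Note the `i` suffix on `percentage_of_used_cons=6i`: the field is an integer because the SQL above performs integer division. A tiny Go illustration of the same arithmetic (the values are made up):

```go
package main

import "fmt"

func main() {
	// Suppose 6 of 100 allowed connections are in use; integer division
	// keeps the result an integer, which Telegraf emits as 6i in line protocol.
	activeConnections := 6
	maxConnections := 100
	fmt.Println(activeConnections * 100 / maxConnections) // 6
}
```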
## Metrics
The metrics collected by this input plugin depend on the configured queries.
By default, the following format is used:
* postgresql
* tags:
* db
* server


@ -0,0 +1,240 @@
//go:generate ../../../tools/readme_config_includer/generator
package postgresql_extensible
import (
"bytes"
_ "embed"
"fmt"
"os"
"strings"
"time"
// Required for SQL framework driver
_ "github.com/jackc/pgx/v4/stdlib"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/common/postgresql"
"github.com/influxdata/telegraf/plugins/inputs"
)
//go:embed sample.conf
var sampleConfig string
var ignoredColumns = map[string]bool{"stats_reset": true}
type Postgresql struct {
Databases []string `deprecated:"1.22.4;use the sqlquery option to specify database to use"`
Query []query `toml:"query"`
PreparedStatements bool `toml:"prepared_statements"`
Log telegraf.Logger `toml:"-"`
postgresql.Config
service *postgresql.Service
}
type query struct {
Sqlquery string `toml:"sqlquery"`
Script string `toml:"script"`
Version int `deprecated:"1.28.0;use minVersion to specify minimal DB version this query supports"`
MinVersion int `toml:"min_version"`
MaxVersion int `toml:"max_version"`
Withdbname bool `deprecated:"1.22.4;use the sqlquery option to specify database to use"`
Tagvalue string `toml:"tagvalue"`
Measurement string `toml:"measurement"`
Timestamp string `toml:"timestamp"`
additionalTags map[string]bool
}
type scanner interface {
Scan(dest ...interface{}) error
}
func (*Postgresql) SampleConfig() string {
return sampleConfig
}
func (p *Postgresql) Init() error {
// Set defaults for the queries
for i, q := range p.Query {
if q.Sqlquery == "" {
query, err := os.ReadFile(q.Script)
if err != nil {
return err
}
q.Sqlquery = string(query)
}
if q.MinVersion == 0 {
q.MinVersion = q.Version
}
if q.Measurement == "" {
q.Measurement = "postgresql"
}
var queryAddon string
if q.Withdbname {
if len(p.Databases) != 0 {
queryAddon = fmt.Sprintf(` IN ('%s')`, strings.Join(p.Databases, "','"))
} else {
queryAddon = " is not null"
}
}
q.Sqlquery += queryAddon
q.additionalTags = make(map[string]bool)
if q.Tagvalue != "" {
for _, tag := range strings.Split(q.Tagvalue, ",") {
q.additionalTags[tag] = true
}
}
p.Query[i] = q
}
p.Config.IsPgBouncer = !p.PreparedStatements
// Create a service to access the PostgreSQL server
service, err := p.Config.CreateService()
if err != nil {
return err
}
p.service = service
return nil
}
func (p *Postgresql) Start(_ telegraf.Accumulator) error {
return p.service.Start()
}
func (p *Postgresql) Gather(acc telegraf.Accumulator) error {
// Retrieving the database version
query := `SELECT setting::integer / 100 AS version FROM pg_settings WHERE name = 'server_version_num'`
var dbVersion int
if err := p.service.DB.QueryRow(query).Scan(&dbVersion); err != nil {
dbVersion = 0
}
// set default timestamp to Now and use for all generated metrics during
// the same Gather call
timestamp := time.Now()
// We loop in order to process each query
// Query is not run if Database version does not match the query version.
for _, q := range p.Query {
if q.MinVersion <= dbVersion && (q.MaxVersion == 0 || q.MaxVersion > dbVersion) {
acc.AddError(p.gatherMetricsFromQuery(acc, q, timestamp))
}
}
return nil
}
func (p *Postgresql) Stop() {
p.service.Stop()
}
func (p *Postgresql) gatherMetricsFromQuery(acc telegraf.Accumulator, q query, timestamp time.Time) error {
rows, err := p.service.DB.Query(q.Sqlquery)
if err != nil {
return err
}
defer rows.Close()
// grab the column information from the result
columns, err := rows.Columns()
if err != nil {
return err
}
for rows.Next() {
if err := p.accRow(acc, rows, columns, q, timestamp); err != nil {
return err
}
}
return nil
}
func (p *Postgresql) accRow(acc telegraf.Accumulator, row scanner, columns []string, q query, timestamp time.Time) error {
// this is where we'll store the column name with its *interface{}
columnMap := make(map[string]*interface{})
for _, column := range columns {
columnMap[column] = new(interface{})
}
columnVars := make([]interface{}, 0, len(columnMap))
// populate the array of interface{} with the pointers in the right order
for i := 0; i < len(columnMap); i++ {
columnVars = append(columnVars, columnMap[columns[i]])
}
// deconstruct array of variables and send to Scan
if err := row.Scan(columnVars...); err != nil {
return err
}
var dbname bytes.Buffer
if c, ok := columnMap["datname"]; ok && *c != nil {
// extract the database name from the column map
switch datname := (*c).(type) {
case string:
dbname.WriteString(datname)
default:
dbname.WriteString(p.service.ConnectionDatabase)
}
} else {
dbname.WriteString(p.service.ConnectionDatabase)
}
// Process the additional tags
tags := map[string]string{
"server": p.service.SanitizedAddress,
"db": dbname.String(),
}
fields := make(map[string]interface{})
for col, val := range columnMap {
p.Log.Debugf("Column: %s = %T: %v\n", col, *val, *val)
_, ignore := ignoredColumns[col]
if ignore || *val == nil {
continue
}
if col == q.Timestamp {
if v, ok := (*val).(time.Time); ok {
timestamp = v
}
continue
}
if q.additionalTags[col] {
v, err := internal.ToString(*val)
if err != nil {
p.Log.Debugf("Failed to add %q as additional tag: %v", col, err)
} else {
tags[col] = v
}
continue
}
if v, ok := (*val).([]byte); ok {
fields[col] = string(v)
} else {
fields[col] = *val
}
}
acc.AddFields(q.Measurement, fields, tags, timestamp)
return nil
}
func init() {
inputs.Add("postgresql_extensible", func() telegraf.Input {
return &Postgresql{
Config: postgresql.Config{
MaxIdle: 1,
MaxOpen: 1,
},
PreparedStatements: true,
}
})
}


@ -0,0 +1,368 @@
package postgresql_extensible
import (
"errors"
"fmt"
"testing"
"time"
"github.com/docker/go-connections/nat"
"github.com/stretchr/testify/require"
"github.com/testcontainers/testcontainers-go/wait"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/plugins/common/postgresql"
"github.com/influxdata/telegraf/testutil"
)
func queryRunner(t *testing.T, q []query) *testutil.Accumulator {
servicePort := "5432"
container := testutil.Container{
Image: "postgres:alpine",
ExposedPorts: []string{servicePort},
Env: map[string]string{
"POSTGRES_HOST_AUTH_METHOD": "trust",
},
WaitingFor: wait.ForAll(
wait.ForLog("database system is ready to accept connections").WithOccurrence(2),
wait.ForListeningPort(nat.Port(servicePort)),
),
}
require.NoError(t, container.Start(), "failed to start container")
defer container.Terminate()
addr := fmt.Sprintf(
"host=%s port=%s user=postgres sslmode=disable",
container.Address,
container.Ports[servicePort],
)
p := &Postgresql{
Log: testutil.Logger{},
Config: postgresql.Config{
Address: config.NewSecret([]byte(addr)),
IsPgBouncer: false,
},
Databases: []string{"postgres"},
Query: q,
}
require.NoError(t, p.Init())
var acc testutil.Accumulator
require.NoError(t, p.Start(&acc))
defer p.Stop()
require.NoError(t, acc.GatherError(p.Gather))
return &acc
}
func TestPostgresqlGeneratesMetricsIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
acc := queryRunner(t, []query{{
Sqlquery: "select * from pg_stat_database",
MinVersion: 901,
Withdbname: false,
Tagvalue: "",
}})
testutil.PrintMetrics(acc.GetTelegrafMetrics())
intMetrics := []string{
"xact_commit",
"xact_rollback",
"blks_read",
"blks_hit",
"tup_returned",
"tup_fetched",
"tup_inserted",
"tup_updated",
"tup_deleted",
"conflicts",
"temp_files",
"temp_bytes",
"deadlocks",
"numbackends",
"datid",
}
var int32Metrics []string
floatMetrics := []string{
"blk_read_time",
"blk_write_time",
}
stringMetrics := []string{
"datname",
}
metricsCounted := 0
for _, metric := range intMetrics {
require.True(t, acc.HasInt64Field("postgresql", metric))
metricsCounted++
}
for _, metric := range int32Metrics {
require.True(t, acc.HasInt32Field("postgresql", metric))
metricsCounted++
}
for _, metric := range floatMetrics {
require.True(t, acc.HasFloatField("postgresql", metric))
metricsCounted++
}
for _, metric := range stringMetrics {
require.True(t, acc.HasStringField("postgresql", metric))
metricsCounted++
}
require.Positive(t, metricsCounted)
require.Equal(t, len(floatMetrics)+len(intMetrics)+len(int32Metrics)+len(stringMetrics), metricsCounted)
}
func TestPostgresqlQueryOutputTestsIntegration(t *testing.T) {
const measurement = "postgresql"
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
examples := map[string]func(*testutil.Accumulator){
"SELECT 10.0::float AS myvalue": func(acc *testutil.Accumulator) {
v, found := acc.FloatField(measurement, "myvalue")
require.True(t, found)
require.InDelta(t, 10.0, v, testutil.DefaultDelta)
},
"SELECT 10.0 AS myvalue": func(acc *testutil.Accumulator) {
v, found := acc.StringField(measurement, "myvalue")
require.True(t, found)
require.Equal(t, "10.0", v)
},
"SELECT 'hello world' AS myvalue": func(acc *testutil.Accumulator) {
v, found := acc.StringField(measurement, "myvalue")
require.True(t, found)
require.Equal(t, "hello world", v)
},
"SELECT true AS myvalue": func(acc *testutil.Accumulator) {
v, found := acc.BoolField(measurement, "myvalue")
require.True(t, found)
require.True(t, v)
},
"SELECT timestamp'1980-07-23' as ts, true AS myvalue": func(acc *testutil.Accumulator) {
expectedTime := time.Date(1980, 7, 23, 0, 0, 0, 0, time.UTC)
v, found := acc.BoolField(measurement, "myvalue")
require.True(t, found)
require.True(t, v)
require.True(t, acc.HasTimestamp(measurement, expectedTime))
},
}
for q, assertions := range examples {
acc := queryRunner(t, []query{{
Sqlquery: q,
MinVersion: 901,
Withdbname: false,
Tagvalue: "",
Timestamp: "ts",
}})
assertions(acc)
}
}
func TestPostgresqlFieldOutputIntegration(t *testing.T) {
const measurement = "postgresql"
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
acc := queryRunner(t, []query{{
Sqlquery: "select * from pg_stat_database",
MinVersion: 901,
Withdbname: false,
Tagvalue: "",
}})
intMetrics := []string{
"xact_commit",
"xact_rollback",
"blks_read",
"blks_hit",
"tup_returned",
"tup_fetched",
"tup_inserted",
"tup_updated",
"tup_deleted",
"conflicts",
"temp_files",
"temp_bytes",
"deadlocks",
"numbackends",
"datid",
}
var int32Metrics []string
floatMetrics := []string{
"blk_read_time",
"blk_write_time",
}
stringMetrics := []string{
"datname",
}
for _, field := range intMetrics {
_, found := acc.Int64Field(measurement, field)
require.Truef(t, found, "expected %s to be an integer", field)
}
for _, field := range int32Metrics {
_, found := acc.Int32Field(measurement, field)
require.Truef(t, found, "expected %s to be an int32", field)
}
for _, field := range floatMetrics {
_, found := acc.FloatField(measurement, field)
require.Truef(t, found, "expected %s to be a float64", field)
}
for _, field := range stringMetrics {
_, found := acc.StringField(measurement, field)
require.Truef(t, found, "expected %s to be a str", field)
}
}
func TestPostgresqlSqlScript(t *testing.T) {
q := []query{{
Script: "testdata/test.sql",
MinVersion: 901,
Withdbname: false,
Tagvalue: "",
}}
addr := fmt.Sprintf(
"host=%s user=postgres sslmode=disable",
testutil.GetLocalHost(),
)
p := &Postgresql{
Log: testutil.Logger{},
Config: postgresql.Config{
Address: config.NewSecret([]byte(addr)),
IsPgBouncer: false,
},
Databases: []string{"postgres"},
Query: q,
}
require.NoError(t, p.Init())
var acc testutil.Accumulator
require.NoError(t, p.Start(&acc))
defer p.Stop()
require.NoError(t, acc.GatherError(p.Gather))
}
func TestPostgresqlIgnoresUnwantedColumnsIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
addr := fmt.Sprintf(
"host=%s user=postgres sslmode=disable",
testutil.GetLocalHost(),
)
p := &Postgresql{
Log: testutil.Logger{},
Config: postgresql.Config{
Address: config.NewSecret([]byte(addr)),
},
}
require.NoError(t, p.Init())
var acc testutil.Accumulator
require.NoError(t, p.Start(&acc))
defer p.Stop()
require.NoError(t, acc.GatherError(p.Gather))
require.NotEmpty(t, ignoredColumns)
for col := range ignoredColumns {
require.False(t, acc.HasMeasurement(col))
}
}
func TestAccRow(t *testing.T) {
p := Postgresql{
Log: testutil.Logger{},
Config: postgresql.Config{
Address: config.NewSecret(nil),
OutputAddress: "server",
},
}
require.NoError(t, p.Init())
var acc testutil.Accumulator
columns := []string{"datname", "cat"}
tests := []struct {
fields fakeRow
dbName string
server string
}{
{
fields: fakeRow{
fields: []interface{}{1, "gato"},
},
dbName: "postgres",
server: "server",
},
{
fields: fakeRow{
fields: []interface{}{nil, "gato"},
},
dbName: "postgres",
server: "server",
},
{
fields: fakeRow{
fields: []interface{}{"name", "gato"},
},
dbName: "name",
server: "server",
},
}
for _, tt := range tests {
q := query{Measurement: "pgTEST", additionalTags: make(map[string]bool)}
require.NoError(t, p.accRow(&acc, tt.fields, columns, q, time.Now()))
require.Len(t, acc.Metrics, 1)
metric := acc.Metrics[0]
require.Equal(t, tt.dbName, metric.Tags["db"])
require.Equal(t, tt.server, metric.Tags["server"])
acc.ClearMetrics()
}
}
type fakeRow struct {
fields []interface{}
}
func (f fakeRow) Scan(dest ...interface{}) error {
if len(f.fields) != len(dest) {
return errors.New("nada matchy buddy")
}
for i, d := range dest {
switch d := d.(type) {
case *interface{}:
*d = f.fields[i]
default:
return fmt.Errorf("bad type %T", d)
}
}
return nil
}


@ -0,0 +1,66 @@
# Read metrics from one or many postgresql servers
[[inputs.postgresql_extensible]]
# specify address via a url matching:
# postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=...&statement_timeout=...
# or a simple string:
# host=localhost port=5432 user=pqgotest password=... sslmode=... dbname=app_production
#
# All connection parameters are optional.
# Without the dbname parameter, the driver will default to a database
# with the same name as the user. This dbname is just for instantiating a
# connection with the server and doesn't restrict the databases we are trying
# to grab metrics for.
#
address = "host=localhost user=postgres sslmode=disable"
## Whether to use prepared statements when connecting to the database.
## This should be set to false when connecting through a PgBouncer instance
## with pool_mode set to transaction.
prepared_statements = true
# Define the toml config where the sql queries are stored.
# The script option can be used to specify the path to a .sql file.
# If the script and sqlquery options are specified at the same time, sqlquery is used.
#
# the measurement field defines measurement name for metrics produced
# by the query. Default is "postgresql".
#
# the tagvalue field is used to define custom tags (separated by commas).
# the query is expected to return columns which match the names of the
# defined tags. The values in these columns must be of a string-type,
# a number-type or a blob-type.
#
# The timestamp field is used to override the data point's timestamp value. By
# default, all rows are inserted with the current time. By setting a timestamp
# column, the row will be inserted with that column's value.
#
# The min_version field specifies the minimum database version this query
# will run on.
#
# The max_version field, when set, specifies the database version starting
# from which this query will NOT run.
#
# The database version in `min_version` and `max_version` is represented as
# a single integer without the last component, for example:
# 9.6.2 -> 906
# 15.2 -> 1500
#
# Structure :
# [[inputs.postgresql_extensible.query]]
# measurement string
# sqlquery string
# min_version int
# max_version int
# withdbname boolean
# tagvalue string (comma separated)
# timestamp string
[[inputs.postgresql_extensible.query]]
measurement="pg_stat_database"
sqlquery="SELECT * FROM pg_stat_database WHERE datname"
min_version=901
tagvalue=""
[[inputs.postgresql_extensible.query]]
script="your_sql-filepath.sql"
min_version=901
max_version=1300
tagvalue=""


@ -0,0 +1 @@
select * from pg_stat_database