Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
4963 changed files with 677545 additions and 0 deletions

@@ -0,0 +1,250 @@
# Redis Input Plugin
The Redis input plugin gathers metrics from one or many Redis servers.
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Configuration
```toml @sample.conf
# Read metrics from one or many redis servers
[[inputs.redis]]
## specify servers via a url matching:
## [protocol://][username:password]@address[:port]
## e.g.
## tcp://localhost:6379
## tcp://username:password@192.168.99.100
## unix:///var/run/redis.sock
##
## If no servers are specified, then localhost is used as the host.
## If no port is specified, 6379 is used
servers = ["tcp://localhost:6379"]
## Optional. Specify redis commands to retrieve values
# [[inputs.redis.commands]]
# # The command to run where each argument is a separate element
# command = ["get", "sample-key"]
# # The field to store the result in
# field = "sample-key-value"
# # The type of the result
# # Can be "string", "integer", or "float"
# type = "string"
## Specify username and password for ACL auth (Redis 6.0+). You can add this
## to the server URI above or specify it here. The values here take
## precedence.
# username = ""
# password = ""
## Optional TLS Config
## Check tls/config.go ClientConfig for more options
# tls_enable = true
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = true
```
## Metrics
The plugin gathers the results of the [INFO](https://redis.io/commands/info)
redis command. There are two separate measurements: _redis_ and
_redis\_keyspace_; the latter gathers database-related statistics.
Additionally, the plugin calculates the hit/miss ratio (keyspace\_hitrate)
and the elapsed time since the last RDB save (rdb\_last\_save\_time\_elapsed).
- redis
- keyspace_hitrate(float, number)
- rdb_last_save_time_elapsed(int, seconds)
**Server**
- uptime(int, seconds)
- lru_clock(int, number)
- redis_version(string)
**Clients**
- clients(int, number)
- client_longest_output_list(int, number)
- client_biggest_input_buf(int, number)
- blocked_clients(int, number)
**Memory**
- used_memory(int, bytes)
- used_memory_rss(int, bytes)
- used_memory_peak(int, bytes)
- total_system_memory(int, bytes)
- used_memory_lua(int, bytes)
- maxmemory(int, bytes)
- maxmemory_policy(string)
- mem_fragmentation_ratio(float, number)
**Persistence**
- loading(int, flag)
- rdb_changes_since_last_save(int, number)
- rdb_bgsave_in_progress(int, flag)
- rdb_last_save_time(int, seconds)
- rdb_last_bgsave_status(string)
- rdb_last_bgsave_time_sec(int, seconds)
- rdb_current_bgsave_time_sec(int, seconds)
- aof_enabled(int, flag)
- aof_rewrite_in_progress(int, flag)
- aof_rewrite_scheduled(int, flag)
- aof_last_rewrite_time_sec(int, seconds)
- aof_current_rewrite_time_sec(int, seconds)
- aof_last_bgrewrite_status(string)
- aof_last_write_status(string)
**Stats**
- total_connections_received(int, number)
- total_commands_processed(int, number)
- instantaneous_ops_per_sec(int, number)
- total_net_input_bytes(int, bytes)
- total_net_output_bytes(int, bytes)
- instantaneous_input_kbps(float, KB/sec)
- instantaneous_output_kbps(float, KB/sec)
- rejected_connections(int, number)
- sync_full(int, number)
- sync_partial_ok(int, number)
- sync_partial_err(int, number)
- expired_keys(int, number)
- evicted_keys(int, number)
- keyspace_hits(int, number)
- keyspace_misses(int, number)
- pubsub_channels(int, number)
- pubsub_patterns(int, number)
- latest_fork_usec(int, microseconds)
- migrate_cached_sockets(int, number)
**Replication**
- connected_slaves(int, number)
- master_link_down_since_seconds(int, number)
- master_link_status(string)
- master_repl_offset(int, number)
- second_repl_offset(int, number)
- repl_backlog_active(int, number)
- repl_backlog_size(int, bytes)
- repl_backlog_first_byte_offset(int, number)
- repl_backlog_histlen(int, bytes)
**CPU**
- used_cpu_sys(float, number)
- used_cpu_user(float, number)
- used_cpu_sys_children(float, number)
- used_cpu_user_children(float, number)
**Cluster**
- cluster_enabled(int, flag)
- redis_keyspace
- keys(int, number)
- expires(int, number)
- avg_ttl(int, number)
- redis_cmdstat
Every Redis command that has been used may report the following fields:
- calls(int, number)
- failed_calls(int, number)
- rejected_calls(int, number)
- usec(int, microseconds)
- usec_per_call(float, microseconds)
- redis_latency_percentiles_usec
- fields:
- p50(float, microseconds)
- p99(float, microseconds)
- p99.9(float, microseconds)
- redis_replication
- tags:
- replication_role
- replica_ip
- replica_port
- state (either "online", "wait_bgsave", or "send_bulk")
- fields:
- lag(int, number)
- offset(int, number)
- redis_errorstat
- tags:
- err
- fields:
- total (int, number)
### Tags
- All measurements have the following tags:
- port
- server
- replication_role
- The redis_keyspace measurement has an additional database tag:
- database
- The redis_cmdstat measurement has an additional command tag:
- command
- The redis_latency_percentiles_usec measurement has an additional command tag:
- command
## Example Output
Using this configuration:
```toml
[[inputs.redis]]
## specify servers via a url matching:
## [protocol://][:password]@address[:port]
## e.g.
## tcp://localhost:6379
## tcp://:password@192.168.99.100
##
## If no servers are specified, then localhost is used as the host.
## If no port is specified, 6379 is used
servers = ["tcp://localhost:6379"]
```
When run with:
```sh
./telegraf --config telegraf.conf --input-filter redis --test
```
It produces:
```text
redis,server=localhost,port=6379,replication_role=master,host=host keyspace_hitrate=1,clients=2i,blocked_clients=0i,instantaneous_input_kbps=0,sync_full=0i,pubsub_channels=0i,pubsub_patterns=0i,total_net_output_bytes=6659253i,used_memory=842448i,total_system_memory=8351916032i,aof_current_rewrite_time_sec=-1i,rdb_changes_since_last_save=0i,sync_partial_err=0i,latest_fork_usec=508i,instantaneous_output_kbps=0,expired_keys=0i,used_memory_peak=843416i,aof_rewrite_in_progress=0i,aof_last_bgrewrite_status="ok",migrate_cached_sockets=0i,connected_slaves=0i,maxmemory_policy="noeviction",aof_rewrite_scheduled=0i,total_net_input_bytes=3125i,used_memory_rss=9564160i,repl_backlog_histlen=0i,rdb_last_bgsave_status="ok",aof_last_rewrite_time_sec=-1i,keyspace_misses=0i,client_biggest_input_buf=5i,used_cpu_user=1.33,maxmemory=0i,rdb_current_bgsave_time_sec=-1i,total_commands_processed=271i,repl_backlog_size=1048576i,used_cpu_sys=3,uptime=2822i,lru_clock=16706281i,used_memory_lua=37888i,rejected_connections=0i,sync_partial_ok=0i,evicted_keys=0i,rdb_last_save_time_elapsed=1922i,rdb_last_save_time=1493099368i,instantaneous_ops_per_sec=0i,used_cpu_user_children=0,client_longest_output_list=0i,master_repl_offset=0i,repl_backlog_active=0i,keyspace_hits=2i,used_cpu_sys_children=0,cluster_enabled=0i,rdb_last_bgsave_time_sec=0i,aof_last_write_status="ok",total_connections_received=263i,aof_enabled=0i,repl_backlog_first_byte_offset=0i,mem_fragmentation_ratio=11.35,loading=0i,rdb_bgsave_in_progress=0i 1493101290000000000
```
redis_keyspace:
```text
redis_keyspace,database=db1,host=host,server=localhost,port=6379,replication_role=master keys=1i,expires=0i,avg_ttl=0i 1493101350000000000
```
redis_cmdstat:
```text
redis_cmdstat,command=publish,host=host,port=6379,replication_role=master,server=localhost calls=569514i,failed_calls=0i,rejected_calls=0i,usec=9916334i,usec_per_call=17.41 1559227136000000000
```
redis_latency_percentiles_usec:
```text
redis_latency_percentiles_usec,command=zadd,host=host,port=6379,replication_role=master,server=localhost p50=9.023,p99=28.031,p99.9=43.007 1559227136000000000
```
redis_errorstat:
```text
redis_errorstat,err=MOVED,host=host,port=6379,replication_role=master,server=localhost total=4284 1691119309000000000
```

@@ -0,0 +1,783 @@
//go:generate ../../../tools/readme_config_includer/generator
package redis
import (
"bufio"
"context"
_ "embed"
"fmt"
"io"
"net/url"
"reflect"
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/go-redis/redis/v8"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/common/tls"
"github.com/influxdata/telegraf/plugins/inputs"
)
//go:embed sample.conf
var sampleConfig string
var (
replicationSlaveMetricPrefix = regexp.MustCompile(`^slave\d+`)
tracking = map[string]string{
"uptime_in_seconds": "uptime",
"connected_clients": "clients",
"role": "replication_role",
}
)
type Redis struct {
Commands []*redisCommand `toml:"commands"`
Servers []string `toml:"servers"`
Username string `toml:"username"`
Password string `toml:"password"`
tls.ClientConfig
Log telegraf.Logger `toml:"-"`
clients []client
connected bool
}
type redisCommand struct {
Command []interface{} `toml:"command"`
Field string `toml:"field"`
Type string `toml:"type"`
}
type redisClient struct {
client *redis.Client
tags map[string]string
}
// redisFieldTypes defines the types expected for each of the fields redis reports on
type redisFieldTypes struct {
ActiveDefragHits int64 `json:"active_defrag_hits"`
ActiveDefragKeyHits int64 `json:"active_defrag_key_hits"`
ActiveDefragKeyMisses int64 `json:"active_defrag_key_misses"`
ActiveDefragMisses int64 `json:"active_defrag_misses"`
ActiveDefragRunning int64 `json:"active_defrag_running"`
AllocatorActive int64 `json:"allocator_active"`
AllocatorAllocated int64 `json:"allocator_allocated"`
AllocatorFragBytes float64 `json:"allocator_frag_bytes"` // for historical reasons this was left as float although redis reports it as an int
AllocatorFragRatio float64 `json:"allocator_frag_ratio"`
AllocatorResident int64 `json:"allocator_resident"`
AllocatorRssBytes int64 `json:"allocator_rss_bytes"`
AllocatorRssRatio float64 `json:"allocator_rss_ratio"`
AofCurrentRewriteTimeSec int64 `json:"aof_current_rewrite_time_sec"`
AofEnabled int64 `json:"aof_enabled"`
AofLastBgrewriteStatus string `json:"aof_last_bgrewrite_status"`
AofLastCowSize int64 `json:"aof_last_cow_size"`
AofLastRewriteTimeSec int64 `json:"aof_last_rewrite_time_sec"`
AofLastWriteStatus string `json:"aof_last_write_status"`
AofRewriteInProgress int64 `json:"aof_rewrite_in_progress"`
AofRewriteScheduled int64 `json:"aof_rewrite_scheduled"`
BlockedClients int64 `json:"blocked_clients"`
ClientRecentMaxInputBuffer int64 `json:"client_recent_max_input_buffer"`
ClientRecentMaxOutputBuffer int64 `json:"client_recent_max_output_buffer"`
Clients int64 `json:"clients"`
ClientsInTimeoutTable int64 `json:"clients_in_timeout_table"`
ClusterEnabled int64 `json:"cluster_enabled"`
ConnectedSlaves int64 `json:"connected_slaves"`
EvictedKeys int64 `json:"evicted_keys"`
ExpireCycleCPUMilliseconds int64 `json:"expire_cycle_cpu_milliseconds"`
ExpiredKeys int64 `json:"expired_keys"`
ExpiredStalePerc float64 `json:"expired_stale_perc"`
ExpiredTimeCapReachedCount int64 `json:"expired_time_cap_reached_count"`
InstantaneousInputKbps float64 `json:"instantaneous_input_kbps"`
InstantaneousOpsPerSec int64 `json:"instantaneous_ops_per_sec"`
InstantaneousOutputKbps float64 `json:"instantaneous_output_kbps"`
IoThreadedReadsProcessed int64 `json:"io_threaded_reads_processed"`
IoThreadedWritesProcessed int64 `json:"io_threaded_writes_processed"`
KeyspaceHits int64 `json:"keyspace_hits"`
KeyspaceMisses int64 `json:"keyspace_misses"`
LatestForkUsec int64 `json:"latest_fork_usec"`
LazyfreePendingObjects int64 `json:"lazyfree_pending_objects"`
Loading int64 `json:"loading"`
LruClock int64 `json:"lru_clock"`
MasterReplOffset int64 `json:"master_repl_offset"`
MaxMemory int64 `json:"maxmemory"`
MaxMemoryPolicy string `json:"maxmemory_policy"`
MemAofBuffer int64 `json:"mem_aof_buffer"`
MemClientsNormal int64 `json:"mem_clients_normal"`
MemClientsSlaves int64 `json:"mem_clients_slaves"`
MemFragmentationBytes int64 `json:"mem_fragmentation_bytes"`
MemFragmentationRatio float64 `json:"mem_fragmentation_ratio"`
MemNotCountedForEvict int64 `json:"mem_not_counted_for_evict"`
MemReplicationBacklog int64 `json:"mem_replication_backlog"`
MigrateCachedSockets int64 `json:"migrate_cached_sockets"`
ModuleForkInProgress int64 `json:"module_fork_in_progress"`
ModuleForkLastCowSize int64 `json:"module_fork_last_cow_size"`
NumberOfCachedScripts int64 `json:"number_of_cached_scripts"`
PubsubChannels int64 `json:"pubsub_channels"`
PubsubPatterns int64 `json:"pubsub_patterns"`
RdbBgsaveInProgress int64 `json:"rdb_bgsave_in_progress"`
RdbChangesSinceLastSave int64 `json:"rdb_changes_since_last_save"`
RdbCurrentBgsaveTimeSec int64 `json:"rdb_current_bgsave_time_sec"`
RdbLastBgsaveStatus string `json:"rdb_last_bgsave_status"`
RdbLastBgsaveTimeSec int64 `json:"rdb_last_bgsave_time_sec"`
RdbLastCowSize int64 `json:"rdb_last_cow_size"`
RdbLastSaveTime int64 `json:"rdb_last_save_time"`
RdbLastSaveTimeElapsed int64 `json:"rdb_last_save_time_elapsed"`
RedisVersion string `json:"redis_version"`
RejectedConnections int64 `json:"rejected_connections"`
ReplBacklogActive int64 `json:"repl_backlog_active"`
ReplBacklogFirstByteOffset int64 `json:"repl_backlog_first_byte_offset"`
ReplBacklogHistlen int64 `json:"repl_backlog_histlen"`
ReplBacklogSize int64 `json:"repl_backlog_size"`
RssOverheadBytes int64 `json:"rss_overhead_bytes"`
RssOverheadRatio float64 `json:"rss_overhead_ratio"`
SecondReplOffset int64 `json:"second_repl_offset"`
SlaveExpiresTrackedKeys int64 `json:"slave_expires_tracked_keys"`
SyncFull int64 `json:"sync_full"`
SyncPartialErr int64 `json:"sync_partial_err"`
SyncPartialOk int64 `json:"sync_partial_ok"`
TotalCommandsProcessed int64 `json:"total_commands_processed"`
TotalConnectionsReceived int64 `json:"total_connections_received"`
TotalNetInputBytes int64 `json:"total_net_input_bytes"`
TotalNetOutputBytes int64 `json:"total_net_output_bytes"`
TotalReadsProcessed int64 `json:"total_reads_processed"`
TotalSystemMemory int64 `json:"total_system_memory"`
TotalWritesProcessed int64 `json:"total_writes_processed"`
TrackingClients int64 `json:"tracking_clients"`
TrackingTotalItems int64 `json:"tracking_total_items"`
TrackingTotalKeys int64 `json:"tracking_total_keys"`
TrackingTotalPrefixes int64 `json:"tracking_total_prefixes"`
UnexpectedErrorReplies int64 `json:"unexpected_error_replies"`
Uptime int64 `json:"uptime"`
UsedCPUSys float64 `json:"used_cpu_sys"`
UsedCPUSysChildren float64 `json:"used_cpu_sys_children"`
UsedCPUUser float64 `json:"used_cpu_user"`
UsedCPUUserChildren float64 `json:"used_cpu_user_children"`
UsedMemory int64 `json:"used_memory"`
UsedMemoryDataset int64 `json:"used_memory_dataset"`
UsedMemoryDatasetPerc float64 `json:"used_memory_dataset_perc"`
UsedMemoryLua int64 `json:"used_memory_lua"`
UsedMemoryOverhead int64 `json:"used_memory_overhead"`
UsedMemoryPeak int64 `json:"used_memory_peak"`
UsedMemoryPeakPerc float64 `json:"used_memory_peak_perc"`
UsedMemoryRss int64 `json:"used_memory_rss"`
UsedMemoryScripts int64 `json:"used_memory_scripts"`
UsedMemoryStartup int64 `json:"used_memory_startup"`
}
type client interface {
do(returnType string, args ...interface{}) (interface{}, error)
info() *redis.StringCmd
baseTags() map[string]string
close() error
}
func (*Redis) SampleConfig() string {
return sampleConfig
}
func (r *Redis) Init() error {
for _, command := range r.Commands {
if command.Type != "string" && command.Type != "integer" && command.Type != "float" {
return fmt.Errorf(`unknown result type: expected one of "string", "integer", "float"; got %q`, command.Type)
}
}
return nil
}
func (*Redis) Start(telegraf.Accumulator) error {
return nil
}
func (r *Redis) Gather(acc telegraf.Accumulator) error {
if !r.connected {
err := r.connect()
if err != nil {
return err
}
}
var wg sync.WaitGroup
for _, cl := range r.clients {
wg.Add(1)
go func(client client) {
defer wg.Done()
acc.AddError(gatherServer(client, acc))
acc.AddError(r.gatherCommandValues(client, acc))
}(cl)
}
wg.Wait()
return nil
}
// Stop closes the connected clients; together with Start it implements the ServiceInput interface.
func (r *Redis) Stop() {
for _, c := range r.clients {
err := c.close()
if err != nil {
r.Log.Errorf("error closing client: %v", err)
}
}
}
func (r *Redis) connect() error {
if r.connected {
return nil
}
if len(r.Servers) == 0 {
r.Servers = []string{"tcp://localhost:6379"}
}
r.clients = make([]client, 0, len(r.Servers))
for _, serv := range r.Servers {
if !strings.HasPrefix(serv, "tcp://") && !strings.HasPrefix(serv, "unix://") {
r.Log.Warn("Server URL found without scheme; please update your configuration file")
serv = "tcp://" + serv
}
u, err := url.Parse(serv)
if err != nil {
return fmt.Errorf("unable to parse server address %q: %w", serv, err)
}
username := ""
password := ""
if u.User != nil {
username = u.User.Username()
pw, ok := u.User.Password()
if ok {
password = pw
}
}
if len(r.Username) > 0 {
username = r.Username
}
if len(r.Password) > 0 {
password = r.Password
}
var address string
if u.Scheme == "unix" {
address = u.Path
} else {
address = u.Host
}
tlsConfig, err := r.ClientConfig.TLSConfig()
if err != nil {
return err
}
client := redis.NewClient(
&redis.Options{
Addr: address,
Username: username,
Password: password,
Network: u.Scheme,
PoolSize: 1,
TLSConfig: tlsConfig,
},
)
tags := make(map[string]string, 2)
if u.Scheme == "unix" {
tags["socket"] = u.Path
} else {
tags["server"] = u.Hostname()
tags["port"] = u.Port()
}
r.clients = append(r.clients, &redisClient{
client: client,
tags: tags,
})
}
r.connected = true
return nil
}
func (r *Redis) gatherCommandValues(client client, acc telegraf.Accumulator) error {
fields := make(map[string]interface{})
for _, command := range r.Commands {
val, err := client.do(command.Type, command.Command...)
if err != nil {
if strings.Contains(err.Error(), "unexpected type=") {
return fmt.Errorf("could not get command result: %w", err)
}
return err
}
fields[command.Field] = val
}
acc.AddFields("redis_commands", fields, client.baseTags())
return nil
}
func (r *redisClient) do(returnType string, args ...interface{}) (interface{}, error) {
rawVal := r.client.Do(context.Background(), args...)
switch returnType {
case "integer":
return rawVal.Int64()
case "string":
return rawVal.Text()
case "float":
return rawVal.Float64()
default:
return rawVal.Text()
}
}
func (r *redisClient) info() *redis.StringCmd {
return r.client.Info(context.Background(), "ALL")
}
func (r *redisClient) baseTags() map[string]string {
tags := make(map[string]string)
for k, v := range r.tags {
tags[k] = v
}
return tags
}
func (r *redisClient) close() error {
return r.client.Close()
}
func gatherServer(client client, acc telegraf.Accumulator) error {
info, err := client.info().Result()
if err != nil {
return err
}
rdr := strings.NewReader(info)
return gatherInfoOutput(rdr, acc, client.baseTags())
}
func gatherInfoOutput(rdr io.Reader, acc telegraf.Accumulator, tags map[string]string) error {
var section string
var keyspaceHits, keyspaceMisses int64
scanner := bufio.NewScanner(rdr)
fields := make(map[string]interface{})
for scanner.Scan() {
line := scanner.Text()
if len(line) == 0 {
continue
}
if line[0] == '#' {
if len(line) > 2 {
section = line[2:]
}
continue
}
parts := strings.SplitN(line, ":", 2)
if len(parts) < 2 {
continue
}
name := parts[0]
if section == "Server" {
if name != "lru_clock" && name != "uptime_in_seconds" && name != "redis_version" {
continue
}
}
if strings.HasPrefix(name, "master_replid") {
continue
}
if name == "mem_allocator" {
continue
}
if strings.HasSuffix(name, "_human") {
continue
}
metric, ok := tracking[name]
if !ok {
if section == "Keyspace" {
kline := strings.TrimSpace(parts[1])
gatherKeyspaceLine(name, kline, acc, tags)
continue
}
if section == "Commandstats" {
kline := strings.TrimSpace(parts[1])
gatherCommandStateLine(name, kline, acc, tags)
continue
}
if section == "Latencystats" {
kline := strings.TrimSpace(parts[1])
gatherLatencyStatsLine(name, kline, acc, tags)
continue
}
if section == "Replication" && replicationSlaveMetricPrefix.MatchString(name) {
kline := strings.TrimSpace(parts[1])
gatherReplicationLine(name, kline, acc, tags)
continue
}
if section == "Errorstats" {
kline := strings.TrimSpace(parts[1])
gatherErrorStatsLine(name, kline, acc, tags)
continue
}
metric = name
}
val := strings.TrimSpace(parts[1])
// Some percentage values have a "%" suffix that we need to get rid of before int/float conversion
val = strings.TrimSuffix(val, "%")
// Try parsing as int
if ival, err := strconv.ParseInt(val, 10, 64); err == nil {
switch name {
case "keyspace_hits":
keyspaceHits = ival
case "keyspace_misses":
keyspaceMisses = ival
case "rdb_last_save_time":
// influxdb can't calculate this, so we have to do it
fields["rdb_last_save_time_elapsed"] = time.Now().Unix() - ival
}
fields[metric] = ival
continue
}
// Try parsing as a float
if fval, err := strconv.ParseFloat(val, 64); err == nil {
fields[metric] = fval
continue
}
// Treat it as a string
if name == "role" {
tags["replication_role"] = val
continue
}
fields[metric] = val
}
var keyspaceHitrate float64
if keyspaceHits != 0 || keyspaceMisses != 0 {
keyspaceHitrate = float64(keyspaceHits) / float64(keyspaceHits+keyspaceMisses)
}
fields["keyspace_hitrate"] = keyspaceHitrate
o := redisFieldTypes{}
setStructFieldsFromObject(fields, &o)
setExistingFieldsFromStruct(fields, &o)
acc.AddFields("redis", fields, tags)
return nil
}
// Parse the special Keyspace line at end of redis stats
// This is a special line that looks something like:
//
// db0:keys=2,expires=0,avg_ttl=0
//
// And there is one for each db on the redis instance
func gatherKeyspaceLine(name, line string, acc telegraf.Accumulator, globalTags map[string]string) {
if strings.Contains(line, "keys=") {
fields := make(map[string]interface{})
tags := make(map[string]string)
for k, v := range globalTags {
tags[k] = v
}
tags["database"] = name
dbparts := strings.Split(line, ",")
for _, dbp := range dbparts {
kv := strings.Split(dbp, "=")
ival, err := strconv.ParseInt(kv[1], 10, 64)
if err == nil {
fields[kv[0]] = ival
}
}
acc.AddFields("redis_keyspace", fields, tags)
}
}
// Parse the special cmdstat lines.
// Example:
//
// cmdstat_publish:calls=33791,usec=208789,usec_per_call=6.18
//
// Tag: command=publish; Fields: calls=33791i,usec=208789i,usec_per_call=6.18
func gatherCommandStateLine(name, line string, acc telegraf.Accumulator, globalTags map[string]string) {
if !strings.HasPrefix(name, "cmdstat") {
return
}
fields := make(map[string]interface{})
tags := make(map[string]string)
for k, v := range globalTags {
tags[k] = v
}
tags["command"] = strings.TrimPrefix(name, "cmdstat_")
parts := strings.Split(line, ",")
for _, part := range parts {
kv := strings.Split(part, "=")
if len(kv) != 2 {
continue
}
switch kv[0] {
case "calls":
fallthrough
case "usec", "rejected_calls", "failed_calls":
ival, err := strconv.ParseInt(kv[1], 10, 64)
if err == nil {
fields[kv[0]] = ival
}
case "usec_per_call":
fval, err := strconv.ParseFloat(kv[1], 64)
if err == nil {
fields[kv[0]] = fval
}
}
}
acc.AddFields("redis_cmdstat", fields, tags)
}
// Parse the special latency_percentiles_usec lines.
// Example:
//
// latency_percentiles_usec_zadd:p50=9.023,p99=28.031,p99.9=43.007
//
// Tag: command=zadd; Fields: p50=9.023,p99=28.031,p99.9=43.007
func gatherLatencyStatsLine(name, line string, acc telegraf.Accumulator, globalTags map[string]string) {
if !strings.HasPrefix(name, "latency_percentiles_usec") {
return
}
fields := make(map[string]interface{})
tags := make(map[string]string)
for k, v := range globalTags {
tags[k] = v
}
tags["command"] = strings.TrimPrefix(name, "latency_percentiles_usec_")
parts := strings.Split(line, ",")
for _, part := range parts {
kv := strings.Split(part, "=")
if len(kv) != 2 {
continue
}
switch kv[0] {
case "p50", "p99", "p99.9":
fval, err := strconv.ParseFloat(kv[1], 64)
if err == nil {
fields[kv[0]] = fval
}
}
}
acc.AddFields("redis_latency_percentiles_usec", fields, tags)
}
// Parse the special Replication line
// Example:
//
// slave0:ip=127.0.0.1,port=7379,state=online,offset=4556468,lag=0
//
// This line will only be visible when a node has a replica attached.
func gatherReplicationLine(name, line string, acc telegraf.Accumulator, globalTags map[string]string) {
fields := make(map[string]interface{})
tags := make(map[string]string)
for k, v := range globalTags {
tags[k] = v
}
tags["replica_id"] = strings.TrimLeft(name, "slave")
tags["replication_role"] = "slave"
parts := strings.Split(line, ",")
for _, part := range parts {
kv := strings.Split(part, "=")
if len(kv) != 2 {
continue
}
switch kv[0] {
case "ip":
tags["replica_ip"] = kv[1]
case "port":
tags["replica_port"] = kv[1]
case "state":
tags[kv[0]] = kv[1]
default:
ival, err := strconv.ParseInt(kv[1], 10, 64)
if err == nil {
fields[kv[0]] = ival
}
}
}
acc.AddFields("redis_replication", fields, tags)
}
// Parse the special Errorstats lines.
// Example:
//
// errorstat_ERR:count=37
// errorstat_MOVED:count=3626
func gatherErrorStatsLine(name, line string, acc telegraf.Accumulator, globalTags map[string]string) {
tags := make(map[string]string, len(globalTags)+1)
for k, v := range globalTags {
tags[k] = v
}
tags["err"] = strings.TrimPrefix(name, "errorstat_")
kv := strings.Split(line, "=")
if len(kv) < 2 {
acc.AddError(fmt.Errorf("invalid line for %q: %s", name, line))
return
}
ival, err := strconv.ParseInt(kv[1], 10, 64)
if err != nil {
acc.AddError(fmt.Errorf("parsing value in line %q failed: %w", line, err))
return
}
fields := map[string]interface{}{"total": ival}
acc.AddFields("redis_errorstat", fields, tags)
}
func setExistingFieldsFromStruct(fields map[string]interface{}, o *redisFieldTypes) {
val := reflect.ValueOf(o).Elem()
typ := val.Type()
for key := range fields {
if _, exists := fields[key]; exists {
for i := 0; i < typ.NumField(); i++ {
f := typ.Field(i)
jsonFieldName := f.Tag.Get("json")
if jsonFieldName == key {
fields[key] = val.Field(i).Interface()
break
}
}
}
}
}
func setStructFieldsFromObject(fields map[string]interface{}, o *redisFieldTypes) {
val := reflect.ValueOf(o).Elem()
typ := val.Type()
for key, value := range fields {
if _, exists := fields[key]; exists {
for i := 0; i < typ.NumField(); i++ {
f := typ.Field(i)
jsonFieldName := f.Tag.Get("json")
if jsonFieldName == key {
structFieldValue := val.Field(i)
structFieldValue.Set(coerceType(value, structFieldValue.Type()))
break
}
}
}
}
}
func coerceType(value interface{}, typ reflect.Type) reflect.Value {
switch sourceType := value.(type) {
case bool:
switch typ.Kind() {
case reflect.String:
if sourceType {
value = "true"
} else {
value = "false"
}
case reflect.Int64:
if sourceType {
value = int64(1)
} else {
value = int64(0)
}
case reflect.Float64:
if sourceType {
value = float64(1)
} else {
value = float64(0)
}
default:
panic("unhandled destination type " + typ.Kind().String())
}
case int, int8, int16, int32, int64:
switch typ.Kind() {
case reflect.String:
value = fmt.Sprintf("%d", value)
case reflect.Int64:
// types match
case reflect.Float64:
value = float64(reflect.ValueOf(sourceType).Int())
default:
panic("unhandled destination type " + typ.Kind().String())
}
case uint, uint8, uint16, uint32, uint64:
switch typ.Kind() {
case reflect.String:
value = fmt.Sprintf("%d", value)
case reflect.Int64:
// types match
case reflect.Float64:
value = float64(reflect.ValueOf(sourceType).Uint())
default:
panic("unhandled destination type " + typ.Kind().String())
}
case float32, float64:
switch typ.Kind() {
case reflect.String:
value = fmt.Sprintf("%f", value)
case reflect.Int64:
value = int64(reflect.ValueOf(sourceType).Float())
case reflect.Float64:
// types match
default:
panic("unhandled destination type " + typ.Kind().String())
}
case string:
switch typ.Kind() {
case reflect.String:
// types match
case reflect.Int64:
//nolint:errcheck // no way to propagate, shouldn't panic
value, _ = strconv.ParseInt(value.(string), 10, 64)
case reflect.Float64:
//nolint:errcheck // no way to propagate, shouldn't panic
value, _ = strconv.ParseFloat(value.(string), 64)
default:
panic("unhandled destination type " + typ.Kind().String())
}
default:
panic(fmt.Sprintf("unhandled source type %T", sourceType))
}
return reflect.ValueOf(value)
}
func init() {
inputs.Add("redis", func() telegraf.Input {
return &Redis{}
})
}

@@ -0,0 +1,584 @@
package redis
import (
"bufio"
"fmt"
"strings"
"testing"
"time"
"github.com/docker/go-connections/nat"
"github.com/go-redis/redis/v8"
"github.com/stretchr/testify/require"
"github.com/testcontainers/testcontainers-go/wait"
"github.com/influxdata/telegraf/testutil"
)
type testClient struct{}
func (*testClient) baseTags() map[string]string {
return map[string]string{"host": "redis.net"}
}
func (*testClient) info() *redis.StringCmd {
return nil
}
func (*testClient) do(string, ...interface{}) (interface{}, error) {
return 2, nil
}
func (*testClient) close() error {
return nil
}
func TestRedisConnectIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
servicePort := "6379"
container := testutil.Container{
Image: "redis:alpine",
ExposedPorts: []string{servicePort},
WaitingFor: wait.ForListeningPort(nat.Port(servicePort)),
}
err := container.Start()
require.NoError(t, err, "failed to start container")
defer container.Terminate()
addr := fmt.Sprintf("%s:%s", container.Address, container.Ports[servicePort])
r := &Redis{
Log: testutil.Logger{},
Servers: []string{addr},
}
var acc testutil.Accumulator
err = acc.GatherError(r.Gather)
require.NoError(t, err)
}
func TestRedis_Commands(t *testing.T) {
const redisListKey = "test-list-length"
var acc testutil.Accumulator
tc := &testClient{}
rc := &redisCommand{
Command: []interface{}{"llen", "test-list"},
Field: redisListKey,
Type: "integer",
}
r := &Redis{
Commands: []*redisCommand{rc},
clients: []client{tc},
}
err := r.gatherCommandValues(tc, &acc)
require.NoError(t, err)
fields := map[string]interface{}{
redisListKey: 2,
}
acc.AssertContainsFields(t, "redis_commands", fields)
}
func TestRedis_ParseMetrics(t *testing.T) {
var acc testutil.Accumulator
tags := map[string]string{"host": "redis.net"}
rdr := bufio.NewReader(strings.NewReader(testOutput))
err := gatherInfoOutput(rdr, &acc, tags)
require.NoError(t, err)
tags = map[string]string{"host": "redis.net", "replication_role": "master"}
fields := map[string]interface{}{
"uptime": int64(238),
"lru_clock": int64(2364819),
"clients": int64(1),
"client_longest_output_list": int64(0),
"client_biggest_input_buf": int64(0),
"blocked_clients": int64(0),
"used_memory": int64(1003936),
"used_memory_rss": int64(811008),
"used_memory_peak": int64(1003936),
"used_memory_lua": int64(33792),
"used_memory_peak_perc": float64(93.58),
"used_memory_dataset_perc": float64(20.27),
"mem_fragmentation_ratio": float64(0.81),
"loading": int64(0),
"rdb_changes_since_last_save": int64(0),
"rdb_bgsave_in_progress": int64(0),
"rdb_last_save_time": int64(1428427941),
"rdb_last_bgsave_status": "ok",
"rdb_last_bgsave_time_sec": int64(-1),
"rdb_current_bgsave_time_sec": int64(-1),
"aof_enabled": int64(0),
"aof_rewrite_in_progress": int64(0),
"aof_rewrite_scheduled": int64(0),
"aof_last_rewrite_time_sec": int64(-1),
"aof_current_rewrite_time_sec": int64(-1),
"aof_last_bgrewrite_status": "ok",
"aof_last_write_status": "ok",
"total_connections_received": int64(2),
"total_commands_processed": int64(1),
"instantaneous_ops_per_sec": int64(0),
"instantaneous_input_kbps": float64(876.16),
"instantaneous_output_kbps": float64(3010.23),
"rejected_connections": int64(0),
"sync_full": int64(0),
"sync_partial_ok": int64(0),
"sync_partial_err": int64(0),
"expired_keys": int64(0),
"evicted_keys": int64(0),
"keyspace_hits": int64(1),
"keyspace_misses": int64(1),
"pubsub_channels": int64(0),
"pubsub_patterns": int64(0),
"latest_fork_usec": int64(0),
"connected_slaves": int64(2),
"master_repl_offset": int64(0),
"repl_backlog_active": int64(0),
"repl_backlog_size": int64(1048576),
"repl_backlog_first_byte_offset": int64(0),
"repl_backlog_histlen": int64(0),
"second_repl_offset": int64(-1),
"used_cpu_sys": float64(0.14),
"used_cpu_user": float64(0.05),
"used_cpu_sys_children": float64(0.00),
"used_cpu_user_children": float64(0.00),
"keyspace_hitrate": float64(0.50),
"redis_version": "6.0.9",
"active_defrag_hits": int64(0),
"active_defrag_key_hits": int64(0),
"active_defrag_key_misses": int64(0),
"active_defrag_misses": int64(0),
"active_defrag_running": int64(0),
"allocator_active": int64(1022976),
"allocator_allocated": int64(1019632),
"allocator_frag_bytes": float64(3344),
"allocator_frag_ratio": float64(1.00),
"allocator_resident": int64(1022976),
"allocator_rss_bytes": int64(0),
"allocator_rss_ratio": float64(1.00),
"aof_last_cow_size": int64(0),
"client_recent_max_input_buffer": int64(16),
"client_recent_max_output_buffer": int64(0),
"clients_in_timeout_table": int64(0),
"cluster_enabled": int64(0),
"expire_cycle_cpu_milliseconds": int64(669),
"expired_stale_perc": float64(0.00),
"expired_time_cap_reached_count": int64(0),
"io_threaded_reads_processed": int64(0),
"io_threaded_writes_processed": int64(0),
"total_reads_processed": int64(31),
"total_writes_processed": int64(17),
"lazyfree_pending_objects": int64(0),
"maxmemory": int64(0),
"maxmemory_policy": "noeviction",
"mem_aof_buffer": int64(0),
"mem_clients_normal": int64(17440),
"mem_clients_slaves": int64(0),
"mem_fragmentation_bytes": int64(41232),
"mem_not_counted_for_evict": int64(0),
"mem_replication_backlog": int64(0),
"rss_overhead_bytes": int64(37888),
"rss_overhead_ratio": float64(1.04),
"total_system_memory": int64(17179869184),
"used_memory_dataset": int64(47088),
"used_memory_overhead": int64(1019152),
"used_memory_scripts": int64(0),
"used_memory_startup": int64(1001712),
"migrate_cached_sockets": int64(0),
"module_fork_in_progress": int64(0),
"module_fork_last_cow_size": int64(0),
"number_of_cached_scripts": int64(0),
"rdb_last_cow_size": int64(0),
"slave_expires_tracked_keys": int64(0),
"unexpected_error_replies": int64(0),
"total_net_input_bytes": int64(381),
"total_net_output_bytes": int64(71521),
"tracking_clients": int64(0),
"tracking_total_items": int64(0),
"tracking_total_keys": int64(0),
"tracking_total_prefixes": int64(0),
}
	// rdb_last_save_time_elapsed has to be checked manually because its value depends on the time of gathering
for _, m := range acc.Metrics {
for k, v := range m.Fields {
if k == "rdb_last_save_time_elapsed" {
fields[k] = v
}
}
}
require.InDelta(t,
time.Now().Unix()-fields["rdb_last_save_time"].(int64),
fields["rdb_last_save_time_elapsed"].(int64),
2) // allow for 2 seconds worth of offset
keyspaceTags := map[string]string{"host": "redis.net", "replication_role": "master", "database": "db0"}
keyspaceFields := map[string]interface{}{
"avg_ttl": int64(0),
"expires": int64(0),
"keys": int64(2),
}
acc.AssertContainsTaggedFields(t, "redis", fields, tags)
acc.AssertContainsTaggedFields(t, "redis_keyspace", keyspaceFields, keyspaceTags)
cmdstatSetTags := map[string]string{"host": "redis.net", "replication_role": "master", "command": "set"}
cmdstatSetFields := map[string]interface{}{
"calls": int64(261265),
"usec": int64(1634157),
"usec_per_call": float64(6.25),
}
acc.AssertContainsTaggedFields(t, "redis_cmdstat", cmdstatSetFields, cmdstatSetTags)
cmdstatCommandTags := map[string]string{"host": "redis.net", "replication_role": "master", "command": "command"}
cmdstatCommandFields := map[string]interface{}{
"calls": int64(1),
"usec": int64(990),
"usec_per_call": float64(990.0),
}
acc.AssertContainsTaggedFields(t, "redis_cmdstat", cmdstatCommandFields, cmdstatCommandTags)
cmdstatPublishTags := map[string]string{"host": "redis.net", "replication_role": "master", "command": "publish"}
cmdstatPublishFields := map[string]interface{}{
"calls": int64(488662),
"usec": int64(8573493),
"usec_per_call": float64(17.54),
"rejected_calls": int64(0),
"failed_calls": int64(0),
}
acc.AssertContainsTaggedFields(t, "redis_cmdstat", cmdstatPublishFields, cmdstatPublishTags)
latencyZaddTags := map[string]string{"host": "redis.net", "replication_role": "master", "command": "zadd"}
latencyZaddFields := map[string]interface{}{
"p50": float64(9.023),
"p99": float64(28.031),
"p99.9": float64(43.007),
}
acc.AssertContainsTaggedFields(t, "redis_latency_percentiles_usec", latencyZaddFields, latencyZaddTags)
latencyHgetallTags := map[string]string{"host": "redis.net", "replication_role": "master", "command": "hgetall"}
latencyHgetallFields := map[string]interface{}{
"p50": float64(11.007),
"p99": float64(34.047),
"p99.9": float64(66.047),
}
acc.AssertContainsTaggedFields(t, "redis_latency_percentiles_usec", latencyHgetallFields, latencyHgetallTags)
replicationTags := map[string]string{
"host": "redis.net",
"replication_role": "slave",
"replica_id": "0",
"replica_ip": "127.0.0.1",
"replica_port": "7379",
"state": "online",
}
replicationFields := map[string]interface{}{
"lag": int64(0),
"offset": int64(4556468),
}
acc.AssertContainsTaggedFields(t, "redis_replication", replicationFields, replicationTags)
replicationTags = map[string]string{
"host": "redis.net",
"replication_role": "slave",
"replica_id": "1",
"replica_ip": "127.0.0.1",
"replica_port": "8379",
"state": "send_bulk",
}
replicationFields = map[string]interface{}{
"lag": int64(1),
"offset": int64(0),
}
acc.AssertContainsTaggedFields(t, "redis_replication", replicationFields, replicationTags)
errorStatsTags := map[string]string{"host": "redis.net", "replication_role": "master", "err": "MOVED"}
errorStatsFields := map[string]interface{}{"total": int64(3628)}
acc.AssertContainsTaggedFields(t, "redis_errorstat", errorStatsFields, errorStatsTags)
}

func TestRedis_ParseFloatOnInts(t *testing.T) {
var acc testutil.Accumulator
tags := map[string]string{"host": "redis.net"}
rdr := bufio.NewReader(strings.NewReader(strings.Replace(testOutput, "mem_fragmentation_ratio:0.81", "mem_fragmentation_ratio:1", 1)))
err := gatherInfoOutput(rdr, &acc, tags)
require.NoError(t, err)
var m *testutil.Metric
for i := range acc.Metrics {
if _, ok := acc.Metrics[i].Fields["mem_fragmentation_ratio"]; ok {
m = acc.Metrics[i]
break
}
}
require.NotNil(t, m)
fragRatio, ok := m.Fields["mem_fragmentation_ratio"]
require.True(t, ok)
require.IsType(t, float64(0.0), fragRatio)
}

func TestRedis_ParseIntOnFloats(t *testing.T) {
var acc testutil.Accumulator
tags := map[string]string{"host": "redis.net"}
rdr := bufio.NewReader(strings.NewReader(strings.Replace(testOutput, "clients_in_timeout_table:0", "clients_in_timeout_table:0.0", 1)))
err := gatherInfoOutput(rdr, &acc, tags)
require.NoError(t, err)
var m *testutil.Metric
for i := range acc.Metrics {
if _, ok := acc.Metrics[i].Fields["clients_in_timeout_table"]; ok {
m = acc.Metrics[i]
break
}
}
require.NotNil(t, m)
clientsInTimeout, ok := m.Fields["clients_in_timeout_table"]
require.True(t, ok)
require.IsType(t, int64(0), clientsInTimeout)
}

func TestRedis_ParseStringOnInts(t *testing.T) {
var acc testutil.Accumulator
tags := map[string]string{"host": "redis.net"}
	rdr := bufio.NewReader(strings.NewReader(strings.Replace(testOutput, "maxmemory_policy:noeviction", "maxmemory_policy:1", 1)))
err := gatherInfoOutput(rdr, &acc, tags)
require.NoError(t, err)
var m *testutil.Metric
for i := range acc.Metrics {
if _, ok := acc.Metrics[i].Fields["maxmemory_policy"]; ok {
m = acc.Metrics[i]
break
}
}
require.NotNil(t, m)
maxmemoryPolicy, ok := m.Fields["maxmemory_policy"]
require.True(t, ok)
require.IsType(t, string(""), maxmemoryPolicy)
}

func TestRedis_ParseIntOnString(t *testing.T) {
var acc testutil.Accumulator
tags := map[string]string{"host": "redis.net"}
rdr := bufio.NewReader(strings.NewReader(strings.Replace(testOutput, "clients_in_timeout_table:0", `clients_in_timeout_table:""`, 1)))
err := gatherInfoOutput(rdr, &acc, tags)
require.NoError(t, err)
var m *testutil.Metric
for i := range acc.Metrics {
if _, ok := acc.Metrics[i].Fields["clients_in_timeout_table"]; ok {
m = acc.Metrics[i]
break
}
}
require.NotNil(t, m)
clientsInTimeout, ok := m.Fields["clients_in_timeout_table"]
require.True(t, ok)
require.IsType(t, int64(0), clientsInTimeout)
}

func TestRedis_GatherErrorstatsLine(t *testing.T) {
var acc testutil.Accumulator
globalTags := map[string]string{}
gatherErrorStatsLine("FOO", "BAR", &acc, globalTags)
require.Len(t, acc.Errors, 1)
require.Equal(t, "invalid line for \"FOO\": BAR", acc.Errors[0].Error())
acc = testutil.Accumulator{}
gatherErrorStatsLine("FOO", "BAR=a", &acc, globalTags)
require.Len(t, acc.Errors, 1)
require.Equal(t, "parsing value in line \"BAR=a\" failed: strconv.ParseInt: parsing \"a\": invalid syntax", acc.Errors[0].Error())
acc = testutil.Accumulator{}
gatherErrorStatsLine("FOO", "BAR=77", &acc, globalTags)
require.Empty(t, acc.Errors)
}

const testOutput = `# Server
redis_version:6.0.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:26c3229b35eb3beb
redis_mode:standalone
os:Darwin 19.6.0 x86_64
arch_bits:64
multiplexing_api:kqueue
atomicvar_api:atomic-builtin
gcc_version:4.2.1
process_id:46677
run_id:5d6bf38087b23e48f1a59b7aca52e2b55438b02f
tcp_port:6379
uptime_in_seconds:238
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:2364819
executable:/usr/local/opt/redis/bin/redis-server
config_file:/usr/local/etc/redis.conf
io_threads_active:0
# Clients
client_recent_max_input_buffer:16
client_recent_max_output_buffer:0
tracking_clients:0
clients_in_timeout_table:0
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:1003936
used_memory_human:980.41K
used_memory_rss:811008
used_memory_rss_human:1.01M
used_memory_peak:1003936
used_memory_peak_human:980.41K
used_memory_peak_perc:93.58%
used_memory_overhead:1019152
used_memory_startup:1001712
used_memory_dataset:47088
used_memory_dataset_perc:20.27%
allocator_allocated:1019632
allocator_active:1022976
allocator_resident:1022976
total_system_memory:17179869184
total_system_memory_human:16.00G
used_memory_lua:33792
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:3344
allocator_rss_ratio:1.00
allocator_rss_bytes:0
rss_overhead_ratio:1.04
rss_overhead_bytes:37888
mem_fragmentation_ratio:0.81
mem_fragmentation_bytes:41232
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:17440
mem_aof_buffer:0
mem_allocator:libc
active_defrag_running:0
lazyfree_pending_objects:0
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1428427941
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0
# Stats
total_connections_received:2
total_commands_processed:1
instantaneous_ops_per_sec:0
total_net_input_bytes:381
total_net_output_bytes:71521
instantaneous_input_kbps:876.16
instantaneous_output_kbps:3010.23
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:669
evicted_keys:0
keyspace_hits:1
keyspace_misses:1
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_reads_processed:31
total_writes_processed:17
io_threaded_reads_processed:0
io_threaded_writes_processed:0
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=7379,state=online,offset=4556468,lag=0
slave1:ip=127.0.0.1,port=8379,state=send_bulk,offset=0,lag=1
master_replid:8c4d7b768b26826825ceb20ff4a2c7c54616350b
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:0.14
used_cpu_user:0.05
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Cluster
cluster_enabled:0
# Commandstats
cmdstat_set:calls=261265,usec=1634157,usec_per_call=6.25
cmdstat_command:calls=1,usec=990,usec_per_call=990.00
cmdstat_publish:calls=488662,usec=8573493,usec_per_call=17.54,rejected_calls=0,failed_calls=0
# Errorstats
errorstat_CLUSTERDOWN:count=8
errorstat_CROSSSLOT:count=3
errorstat_ERR:count=172
errorstat_LOADING:count=4284
errorstat_MASTERDOWN:count=102
errorstat_MOVED:count=3628
errorstat_NOSCRIPT:count=4
errorstat_WRONGPASS:count=2
errorstat_WRONGTYPE:count=30
# Latencystats
latency_percentiles_usec_zadd:p50=9.023,p99=28.031,p99.9=43.007
latency_percentiles_usec_hgetall:p50=11.007,p99=34.047,p99.9=66.047
# Keyspace
db0:keys=2,expires=0,avg_ttl=0
(error) ERR unknown command 'eof'`


@ -0,0 +1,37 @@
# Read metrics from one or many redis servers
[[inputs.redis]]
## specify servers via a url matching:
## [protocol://][username:password]@address[:port]
## e.g.
## tcp://localhost:6379
## tcp://username:password@192.168.99.100
## unix:///var/run/redis.sock
##
## If no servers are specified, then localhost is used as the host.
## If no port is specified, 6379 is used
servers = ["tcp://localhost:6379"]
## Optional. Specify redis commands to retrieve values
# [[inputs.redis.commands]]
# # The command to run where each argument is a separate element
# command = ["get", "sample-key"]
# # The field to store the result in
# field = "sample-key-value"
# # The type of the result
# # Can be "string", "integer", or "float"
# type = "string"
## Specify username and password for ACL auth (Redis 6.0+). You can add this
## to the server URI above or specify it here. The values here take
## precedence.
# username = ""
# password = ""
## Optional TLS Config
## Check tls/config.go ClientConfig for more options
# tls_enable = true
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = true