Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
4963 changed files with 677545 additions and 0 deletions

@@ -0,0 +1,152 @@
# Memcached Input Plugin
This plugin gathers statistics from [Memcached][memcached] instances.
⭐ Telegraf v0.1.2
🏷️ server
💻 all
[memcached]: https://memcached.org/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Configuration
```toml @sample.conf
# Read metrics from one or many memcached servers.
[[inputs.memcached]]
# An array of addresses to gather stats about. Specify an IP or hostname
# with optional port, e.g. localhost, 10.0.0.1:11211, etc.
servers = ["localhost:11211"]
# An array of unix memcached sockets to gather stats about.
# unix_sockets = ["/var/run/memcached.sock"]
## Optional TLS Config
# enable_tls = false
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## If true, skip chain & host verification
# insecure_skip_verify = false
```
## Metrics
The fields from this plugin are gathered in the *memcached* measurement.
Fields:
* accepting_conns - Whether or not server is accepting conns
* auth_cmds - Number of authentication commands handled, success or failure
* auth_errors - Number of failed authentications
* bytes - Current number of bytes used to store items
* bytes_read - Total number of bytes read by this server from network
* bytes_written - Total number of bytes sent by this server to network
* cas_badval - Number of CAS reqs for which a key was found, but the CAS value
did not match
* cas_hits - Number of successful CAS reqs
* cas_misses - Number of CAS reqs against missing keys
* cmd_flush - Cumulative number of flush reqs
* cmd_get - Cumulative number of retrieval reqs
* cmd_set - Cumulative number of storage reqs
* cmd_touch - Cumulative number of touch reqs
* conn_yields - Number of times any connection yielded to another due to
hitting the -R limit
* connection_structures - Number of connection structures allocated by the
server
* curr_connections - Number of open connections
* curr_items - Current number of items stored
* decr_hits - Number of successful decr reqs
* decr_misses - Number of decr reqs against missing keys
* delete_hits - Number of deletion reqs resulting in an item being removed
* delete_misses - Number of deletion reqs for missing keys
* evicted_active - Items evicted from LRU that had been hit recently but did
not jump to top of LRU
* evicted_unfetched - Items evicted from LRU that were never touched by
get/incr/append/etc
* evictions - Number of valid items removed from cache to free memory for
new items
* expired_unfetched - Items pulled from LRU that were never touched by
get/incr/append/etc before expiring
* extstore_compact_lost - The number of objects lost during the compaction process. This happens when objects couldn't be rescued or moved to other pages before they were overwritten or evicted.
* extstore_compact_rescues - The total number of objects successfully rescued during the compaction process, meaning they were moved to another page instead of being discarded.
* extstore_compact_resc_cold - The number of cold objects (rarely accessed) rescued during the compaction process.
* extstore_compact_resc_old - The number of older objects (likely less frequently accessed) rescued during the compaction process.
* extstore_compact_skipped - The number of compaction operations skipped, often due to the page not requiring compaction or other conditions preventing it.
* extstore_page_allocs - The total number of pages allocated in the external storage system.
* extstore_page_evictions - The total number of pages evicted (removed) from external storage, generally to free up space.
* extstore_page_reclaims - The total number of previously evicted pages that were reclaimed and reused.
* extstore_pages_free - The number of pages currently free (unallocated) in the external storage.
* extstore_pages_used - The number of pages currently in use in the external storage system.
* extstore_objects_evicted - The total number of objects evicted from external storage, typically to free up space.
* extstore_objects_read - The total number of objects read from external storage.
* extstore_objects_written - The total number of objects written to external storage.
* extstore_objects_used - The number of active objects currently in use in the external storage.
* extstore_bytes_evicted - The total number of bytes evicted from external storage.
* extstore_bytes_written - The total number of bytes written to external storage.
* extstore_bytes_read - The total number of bytes read from external storage.
* extstore_bytes_used - The total number of bytes currently in use in external storage.
* extstore_bytes_fragmented - The total number of fragmented bytes in external storage, representing space that is allocated but not fully utilized.
* extstore_limit_maxbytes - The maximum limit of bytes that external storage can use.
* extstore_io_queue - The current length of the I/O queue, representing pending input/output operations for external storage.
* get_expired - Number of items that have been requested but had already
expired
* get_flushed - Number of items that have been requested but have been flushed
via flush_all
* get_hits - Number of keys that have been requested and found present
* get_misses - Number of items that have been requested and not found
* hash_bytes - Bytes currently used by hash tables
* hash_is_expanding - Indicates if the hash table is being grown to a new size
* hash_power_level - Current size multiplier for hash table
* incr_hits - Number of successful incr reqs
* incr_misses - Number of incr reqs against missing keys
* limit_maxbytes - Number of bytes this server is allowed to use for storage
* listen_disabled_num - Number of times server has stopped accepting new
connections (maxconns)
* max_connections - Max number of simultaneous connections
* reclaimed - Number of times an entry was stored using memory from an
expired entry
* rejected_connections - Conns rejected in maxconns_fast mode
* store_no_memory - Number of rejected storage requests caused by exhaustion
of the memory limit when evictions are disabled
* store_too_large - Number of rejected storage requests caused by attempting
to write a value larger than the item size limit
* threads - Number of worker threads requested
* total_connections - Total number of connections opened since the server
started running
* total_items - Total number of items stored since the server started
* touch_hits - Number of keys that have been touched with a new expiration time
* touch_misses - Number of items that have been touched and not found
* uptime - Number of secs since the server started
Descriptions of the gathered fields are taken from the
[memcached protocol docs][protocol].
[protocol]: https://github.com/memcached/memcached/blob/master/doc/protocol.txt
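The plugin obtains these values by sending the plain-text `stats` command to each
server and parsing the returned `STAT <name> <value>` lines, as described in the
protocol docs. Below is a minimal standalone sketch of that exchange (the
`localhost:11211` address and 5s timeout mirror the plugin defaults but are
illustrative only):

```go
package main

import (
    "bufio"
    "fmt"
    "net"
    "strings"
    "time"
)

func main() {
    // Connect with a timeout, as the plugin does with its 5s default.
    conn, err := net.DialTimeout("tcp", "localhost:11211", 5*time.Second)
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Issue the "stats" command; the reply is a series of STAT lines ending in END.
    fmt.Fprint(conn, "stats\r\n")
    scanner := bufio.NewScanner(conn)
    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        if line == "END" {
            break
        }
        fmt.Println(line) // e.g. "STAT uptime 11"
    }
}
```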
## Tags
* Memcached measurements have the following tags:
* server (the host name from which metrics are gathered)
## Sample Queries
You can use the following query to get the average get hit and miss ratios, the
average total size of cached items, the average number of cached items, and the
average connection count per server.
```sql
SELECT mean(get_hits) / mean(cmd_get) as get_ratio, mean(get_misses) / mean(cmd_get) as get_misses_ratio, mean(bytes), mean(curr_items), mean(curr_connections) FROM memcached WHERE time > now() - 1h GROUP BY server
```
## Example Output
```text
memcached,server=localhost:11211 accepting_conns=1i,auth_cmds=0i,auth_errors=0i,bytes=0i,bytes_read=7i,bytes_written=0i,cas_badval=0i,cas_hits=0i,cas_misses=0i,cmd_flush=0i,cmd_get=0i,cmd_set=0i,cmd_touch=0i,conn_yields=0i,connection_structures=3i,curr_connections=2i,curr_items=0i,decr_hits=0i,decr_misses=0i,delete_hits=0i,delete_misses=0i,evicted_active=0i,evicted_unfetched=0i,evictions=0i,expired_unfetched=0i,get_expired=0i,get_flushed=0i,get_hits=0i,get_misses=0i,hash_bytes=524288i,hash_is_expanding=0i,hash_power_level=16i,incr_hits=0i,incr_misses=0i,limit_maxbytes=67108864i,listen_disabled_num=0i,max_connections=1024i,reclaimed=0i,rejected_connections=0i,store_no_memory=0i,store_too_large=0i,threads=4i,total_connections=3i,total_items=0i,touch_hits=0i,touch_misses=0i,uptime=3i 1644771989000000000
```

@@ -0,0 +1,238 @@
//go:generate ../../../tools/readme_config_includer/generator
package memcached
import (
"bufio"
"bytes"
"crypto/tls"
_ "embed"
"errors"
"fmt"
"net"
"strconv"
"time"
"golang.org/x/net/proxy"
"github.com/influxdata/telegraf"
common_tls "github.com/influxdata/telegraf/plugins/common/tls"
"github.com/influxdata/telegraf/plugins/inputs"
)
//go:embed sample.conf
var sampleConfig string
var (
defaultTimeout = 5 * time.Second
// The list of metrics that should be sent
sendMetrics = []string{
"accepting_conns",
"auth_cmds",
"auth_errors",
"bytes",
"bytes_read",
"bytes_written",
"cas_badval",
"cas_hits",
"cas_misses",
"cmd_flush",
"cmd_get",
"cmd_set",
"cmd_touch",
"conn_yields",
"connection_structures",
"curr_connections",
"curr_items",
"decr_hits",
"decr_misses",
"delete_hits",
"delete_misses",
"evicted_active",
"evicted_unfetched",
"evictions",
"expired_unfetched",
"extstore_compact_lost",
"extstore_compact_rescues",
"extstore_compact_resc_cold",
"extstore_compact_resc_old",
"extstore_compact_skipped",
"extstore_page_allocs",
"extstore_page_evictions",
"extstore_page_reclaims",
"extstore_pages_free",
"extstore_pages_used",
"extstore_objects_evicted",
"extstore_objects_read",
"extstore_objects_written",
"extstore_objects_used",
"extstore_bytes_evicted",
"extstore_bytes_written",
"extstore_bytes_read",
"extstore_bytes_used",
"extstore_bytes_fragmented",
"extstore_limit_maxbytes",
"extstore_io_queue",
"get_expired",
"get_flushed",
"get_hits",
"get_misses",
"hash_bytes",
"hash_is_expanding",
"hash_power_level",
"incr_hits",
"incr_misses",
"limit_maxbytes",
"listen_disabled_num",
"max_connections",
"reclaimed",
"rejected_connections",
"store_no_memory",
"store_too_large",
"threads",
"total_connections",
"total_items",
"touch_hits",
"touch_misses",
"uptime",
}
)
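// Memcached holds the plugin configuration: the TCP/TLS server addresses and
// unix sockets to query, plus the common TLS client settings.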
type Memcached struct {
Servers []string `toml:"servers"`
UnixSockets []string `toml:"unix_sockets"`
EnableTLS bool `toml:"enable_tls"`
common_tls.ClientConfig
}
func (*Memcached) SampleConfig() string {
return sampleConfig
}
func (m *Memcached) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 && len(m.UnixSockets) == 0 {
return m.gatherServer(":11211", false, acc)
}
for _, serverAddress := range m.Servers {
acc.AddError(m.gatherServer(serverAddress, false, acc))
}
for _, unixAddress := range m.UnixSockets {
acc.AddError(m.gatherServer(unixAddress, true, acc))
}
return nil
}
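// gatherServer connects to a single instance (TCP, TLS or unix socket; ":11211"
// is appended when a TCP address has no port), sends the "stats" command and
// records the whitelisted STAT values as fields tagged with the server address.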
func (m *Memcached) gatherServer(address string, unix bool, acc telegraf.Accumulator) error {
var conn net.Conn
var err error
var dialer proxy.Dialer
dialer = &net.Dialer{Timeout: defaultTimeout}
if m.EnableTLS {
tlsCfg, err := m.ClientConfig.TLSConfig()
if err != nil {
return err
}
dialer = &tls.Dialer{
NetDialer: dialer.(*net.Dialer),
Config: tlsCfg,
}
}
if unix {
conn, err = dialer.Dial("unix", address)
if err != nil {
return err
}
defer conn.Close()
} else {
_, _, err = net.SplitHostPort(address)
if err != nil {
address = address + ":11211"
}
conn, err = dialer.Dial("tcp", address)
if err != nil {
return err
}
defer conn.Close()
}
if conn == nil {
return errors.New("failed to create net connection")
}
// Set a read/write deadline so a stalled server cannot block the collection
if err := conn.SetDeadline(time.Now().Add(defaultTimeout)); err != nil {
return err
}
// Read and write buffer
rw := bufio.NewReadWriter(bufio.NewReader(conn), bufio.NewWriter(conn))
// Send command
if _, err := fmt.Fprint(rw, "stats\r\n"); err != nil {
return err
}
if err := rw.Flush(); err != nil {
return err
}
values, err := parseResponse(rw.Reader)
if err != nil {
return err
}
// Add server address as a tag
tags := map[string]string{"server": address}
// Process values
fields := make(map[string]interface{})
for _, key := range sendMetrics {
if value, ok := values[key]; ok {
// Most values are integers; keep the raw string when parsing fails
if iValue, errParse := strconv.ParseInt(value, 10, 64); errParse == nil {
fields[key] = iValue
} else {
fields[key] = value
}
}
}
acc.AddFields("memcached", fields, tags)
return nil
}
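// parseResponse reads "STAT <name> <value>" lines from the stats reply until the
// terminating "END" line and returns them as a name-to-value map.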
func parseResponse(r *bufio.Reader) (map[string]string, error) {
values := make(map[string]string)
for {
// Read line
line, _, errRead := r.ReadLine()
if errRead != nil {
return values, errRead
}
// Done
if bytes.Equal(line, []byte("END")) {
break
}
// Read values
s := bytes.SplitN(line, []byte(" "), 3)
if len(s) != 3 || !bytes.Equal(s[0], []byte("STAT")) {
return values, fmt.Errorf("unexpected line in stats response: %q", line)
}
// Save values
values[string(s[1])] = string(s[2])
}
return values, nil
}
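// init registers the plugin with the inputs registry under the name "memcached".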
func init() {
inputs.Add("memcached", func() telegraf.Input {
return &Memcached{}
})
}

@@ -0,0 +1,303 @@
package memcached
import (
"bufio"
"fmt"
"strings"
"testing"
"github.com/docker/go-connections/nat"
"github.com/stretchr/testify/require"
"github.com/testcontainers/testcontainers-go/wait"
"github.com/influxdata/telegraf/testutil"
)
func TestMemcachedGeneratesMetricsIntegration(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
servicePort := "11211"
container := testutil.Container{
Image: "memcached",
ExposedPorts: []string{servicePort},
WaitingFor: wait.ForListeningPort(nat.Port(servicePort)),
}
err := container.Start()
require.NoError(t, err, "failed to start container")
defer container.Terminate()
m := &Memcached{
Servers: []string{fmt.Sprintf("%s:%s", container.Address, container.Ports[servicePort])},
}
var acc testutil.Accumulator
err = acc.GatherError(m.Gather)
require.NoError(t, err)
intMetrics := []string{"get_hits", "get_misses", "evictions",
"limit_maxbytes", "bytes", "uptime", "curr_items", "total_items",
"curr_connections", "total_connections", "connection_structures", "cmd_get",
"cmd_set", "delete_hits", "delete_misses", "incr_hits", "incr_misses",
"decr_hits", "decr_misses", "cas_hits", "cas_misses",
"bytes_read", "bytes_written", "threads", "conn_yields"}
for _, metric := range intMetrics {
require.True(t, acc.HasInt64Field("memcached", metric), metric)
}
}
func TestMemcachedParseMetrics(t *testing.T) {
r := bufio.NewReader(strings.NewReader(memcachedStats))
values, err := parseResponse(r)
require.NoError(t, err, "Error parsing memcached response")
tests := []struct {
key string
value string
}{
{"pid", "5619"},
{"uptime", "11"},
{"time", "1644765868"},
{"version", "1.6.14_5_ge03751b"},
{"libevent", "2.1.11-stable"},
{"pointer_size", "64"},
{"rusage_user", "0.080905"},
{"rusage_system", "0.059330"},
{"max_connections", "1024"},
{"curr_connections", "2"},
{"total_connections", "3"},
{"rejected_connections", "0"},
{"connection_structures", "3"},
{"response_obj_oom", "0"},
{"response_obj_count", "1"},
{"response_obj_bytes", "16384"},
{"read_buf_count", "2"},
{"read_buf_bytes", "32768"},
{"read_buf_bytes_free", "0"},
{"read_buf_oom", "0"},
{"reserved_fds", "20"},
{"cmd_get", "0"},
{"cmd_set", "0"},
{"cmd_flush", "0"},
{"cmd_touch", "0"},
{"cmd_meta", "0"},
{"get_hits", "0"},
{"get_misses", "0"},
{"get_expired", "0"},
{"get_flushed", "0"},
{"delete_misses", "0"},
{"delete_hits", "0"},
{"incr_misses", "0"},
{"incr_hits", "0"},
{"decr_misses", "0"},
{"decr_hits", "0"},
{"cas_misses", "0"},
{"cas_hits", "0"},
{"cas_badval", "0"},
{"touch_hits", "0"},
{"touch_misses", "0"},
{"store_too_large", "0"},
{"store_no_memory", "0"},
{"auth_cmds", "0"},
{"auth_errors", "0"},
{"bytes_read", "6"},
{"bytes_written", "0"},
{"limit_maxbytes", "67108864"},
{"accepting_conns", "1"},
{"listen_disabled_num", "0"},
{"time_in_listen_disabled_us", "0"},
{"threads", "4"},
{"conn_yields", "0"},
{"hash_power_level", "16"},
{"hash_bytes", "524288"},
{"hash_is_expanding", "0"},
{"slab_reassign_rescues", "0"},
{"slab_reassign_chunk_rescues", "0"},
{"slab_reassign_evictions_nomem", "0"},
{"slab_reassign_inline_reclaim", "0"},
{"slab_reassign_busy_items", "0"},
{"slab_reassign_busy_deletes", "0"},
{"slab_reassign_running", "0"},
{"slabs_moved", "0"},
{"lru_crawler_running", "0"},
{"lru_crawler_starts", "1"},
{"lru_maintainer_juggles", "60"},
{"malloc_fails", "0"},
{"log_worker_dropped", "0"},
{"log_worker_written", "0"},
{"log_watcher_skipped", "0"},
{"log_watcher_sent", "0"},
{"log_watchers", "0"},
{"extstore_compact_lost", "3287"},
{"extstore_compact_rescues", "47014"},
{"extstore_compact_resc_cold", "0"},
{"extstore_compact_resc_old", "0"},
{"extstore_compact_skipped", "0"},
{"extstore_page_allocs", "30047"},
{"extstore_page_evictions", "25315"},
{"extstore_page_reclaims", "29247"},
{"extstore_pages_free", "0"},
{"extstore_pages_used", "800"},
{"extstore_objects_evicted", "1243091"},
{"extstore_objects_read", "938410"},
{"extstore_objects_written", "1487003"},
{"extstore_objects_used", "39319"},
{"extstore_bytes_evicted", "1638804587744"},
{"extstore_bytes_written", "1951205770118"},
{"extstore_bytes_read", "1249921752566"},
{"extstore_bytes_used", "51316205305"},
{"extstore_bytes_fragmented", "2370885895"},
{"extstore_limit_maxbytes", "53687091200"},
{"extstore_io_queue", "0"},
{"unexpected_napi_ids", "0"},
{"round_robin_fallback", "0"},
{"bytes", "0"},
{"curr_items", "0"},
{"total_items", "0"},
{"slab_global_page_pool", "0"},
{"expired_unfetched", "0"},
{"evicted_unfetched", "0"},
{"evicted_active", "0"},
{"evictions", "0"},
{"reclaimed", "0"},
{"crawler_reclaimed", "0"},
{"crawler_items_checked", "0"},
{"lrutail_reflocked", "0"},
{"moves_to_cold", "0"},
{"moves_to_warm", "0"},
{"moves_within_lru", "0"},
{"direct_reclaims", "0"},
{"lru_bumps_dropped", "0"},
}
for _, test := range tests {
value, ok := values[test.key]
if !ok {
t.Errorf("Did not find key for metric %s in values", test.key)
continue
}
if value != test.value {
t.Errorf("Metric: %s, Expected: %s, actual: %s",
test.key, test.value, value)
}
}
}
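// memcachedStats is a verbatim stats response captured from memcached 1.6.14 and
// serves as the fixture for TestMemcachedParseMetrics.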
var memcachedStats = `STAT pid 5619
STAT uptime 11
STAT time 1644765868
STAT version 1.6.14_5_ge03751b
STAT libevent 2.1.11-stable
STAT pointer_size 64
STAT rusage_user 0.080905
STAT rusage_system 0.059330
STAT max_connections 1024
STAT curr_connections 2
STAT total_connections 3
STAT rejected_connections 0
STAT connection_structures 3
STAT response_obj_oom 0
STAT response_obj_count 1
STAT response_obj_bytes 16384
STAT read_buf_count 2
STAT read_buf_bytes 32768
STAT read_buf_bytes_free 0
STAT read_buf_oom 0
STAT reserved_fds 20
STAT cmd_get 0
STAT cmd_set 0
STAT cmd_flush 0
STAT cmd_touch 0
STAT cmd_meta 0
STAT get_hits 0
STAT get_misses 0
STAT get_expired 0
STAT get_flushed 0
STAT delete_misses 0
STAT delete_hits 0
STAT incr_misses 0
STAT incr_hits 0
STAT decr_misses 0
STAT decr_hits 0
STAT cas_misses 0
STAT cas_hits 0
STAT cas_badval 0
STAT touch_hits 0
STAT touch_misses 0
STAT store_too_large 0
STAT store_no_memory 0
STAT auth_cmds 0
STAT auth_errors 0
STAT bytes_read 6
STAT bytes_written 0
STAT limit_maxbytes 67108864
STAT accepting_conns 1
STAT listen_disabled_num 0
STAT time_in_listen_disabled_us 0
STAT threads 4
STAT conn_yields 0
STAT hash_power_level 16
STAT hash_bytes 524288
STAT hash_is_expanding 0
STAT slab_reassign_rescues 0
STAT slab_reassign_chunk_rescues 0
STAT slab_reassign_evictions_nomem 0
STAT slab_reassign_inline_reclaim 0
STAT slab_reassign_busy_items 0
STAT slab_reassign_busy_deletes 0
STAT slab_reassign_running 0
STAT slabs_moved 0
STAT lru_crawler_running 0
STAT lru_crawler_starts 1
STAT lru_maintainer_juggles 60
STAT malloc_fails 0
STAT log_worker_dropped 0
STAT log_worker_written 0
STAT log_watcher_skipped 0
STAT log_watcher_sent 0
STAT log_watchers 0
STAT extstore_compact_lost 3287
STAT extstore_compact_rescues 47014
STAT extstore_compact_resc_cold 0
STAT extstore_compact_resc_old 0
STAT extstore_compact_skipped 0
STAT extstore_page_allocs 30047
STAT extstore_page_evictions 25315
STAT extstore_page_reclaims 29247
STAT extstore_pages_free 0
STAT extstore_pages_used 800
STAT extstore_objects_evicted 1243091
STAT extstore_objects_read 938410
STAT extstore_objects_written 1487003
STAT extstore_objects_used 39319
STAT extstore_bytes_evicted 1638804587744
STAT extstore_bytes_written 1951205770118
STAT extstore_bytes_read 1249921752566
STAT extstore_bytes_used 51316205305
STAT extstore_bytes_fragmented 2370885895
STAT extstore_limit_maxbytes 53687091200
STAT extstore_io_queue 0
STAT unexpected_napi_ids 0
STAT round_robin_fallback 0
STAT bytes 0
STAT curr_items 0
STAT total_items 0
STAT slab_global_page_pool 0
STAT expired_unfetched 0
STAT evicted_unfetched 0
STAT evicted_active 0
STAT evictions 0
STAT reclaimed 0
STAT crawler_reclaimed 0
STAT crawler_items_checked 0
STAT lrutail_reflocked 0
STAT moves_to_cold 0
STAT moves_to_warm 0
STAT moves_within_lru 0
STAT direct_reclaims 0
STAT lru_bumps_dropped 0
END
`

@@ -0,0 +1,15 @@
# Read metrics from one or many memcached servers.
[[inputs.memcached]]
# An array of addresses to gather stats about. Specify an IP or hostname
# with optional port, e.g. localhost, 10.0.0.1:11211, etc.
servers = ["localhost:11211"]
# An array of unix memcached sockets to gather stats about.
# unix_sockets = ["/var/run/memcached.sock"]
## Optional TLS Config
# enable_tls = false
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## If true, skip chain & host verification
# insecure_skip_verify = false