
Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-05-24 07:26:29 +02:00
parent e393c3af3f
commit 4978089aab
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
4963 changed files with 677545 additions and 0 deletions


@@ -0,0 +1,187 @@
# Logstash Input Plugin
This plugin gathers metrics from a [Logstash][logstash] endpoint using the
[Monitoring API][logstash_api].
> [!NOTE]
> This plugin supports Logstash 5+.
⭐ Telegraf v1.12.0
🏷️ server
💻 all
[logstash]: https://www.elastic.co/logstash
[logstash_api]: https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
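For reference, the Monitoring API endpoints polled by this plugin are plain HTTP+JSON. The following minimal Go sketch (not part of the plugin; it assumes a local Logstash on the default port 9600) fetches the same JVM stats document the plugin parses:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same endpoint the plugin queries for the logstash_jvm measurement.
	resp, err := http.Get("http://127.0.0.1:9600/_node/stats/jvm")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // raw JSON that the plugin flattens into fields
}
```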
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or create aliases and configure ordering, etc.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Configuration
```toml @sample.conf
# Read metrics exposed by Logstash
[[inputs.logstash]]
## The URL of the exposed Logstash API endpoint.
url = "http://127.0.0.1:9600"
## Use Logstash 5 single pipeline API, set to true when monitoring
## Logstash 5.
# single_pipeline = false
## Enable optional collection components. Can contain
## "pipelines", "process", and "jvm".
# collect = ["pipelines", "process", "jvm"]
## Timeout for HTTP requests.
# timeout = "5s"
## Optional HTTP Basic Auth credentials.
# username = "username"
# password = "pa$$word"
## Optional TLS Config.
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification.
# insecure_skip_verify = false
## If 'use_system_proxy' is set to true, Telegraf will check env vars such as
## HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or their lowercase counterparts).
## If 'use_system_proxy' is set to false (default) and 'http_proxy_url' is
## provided, Telegraf will use the specified URL as HTTP proxy.
# use_system_proxy = false
# http_proxy_url = "http://localhost:8888"
## Optional HTTP headers.
# [inputs.logstash.headers]
# "X-Special-Header" = "Special-Value"
```
## Metrics
Additional plugin stats may be collected, since Logstash does not
consistently expose all stats across versions.
- logstash_jvm
- tags:
- node_id
- node_name
- node_host
- node_version
- fields:
- threads_peak_count
- mem_pools_survivor_peak_max_in_bytes
- mem_pools_survivor_max_in_bytes
- mem_pools_old_peak_used_in_bytes
- mem_pools_young_used_in_bytes
- mem_non_heap_committed_in_bytes
- threads_count
- mem_pools_old_committed_in_bytes
- mem_pools_young_peak_max_in_bytes
- mem_heap_used_percent
- gc_collectors_young_collection_time_in_millis
- mem_pools_survivor_peak_used_in_bytes
- mem_pools_young_committed_in_bytes
- gc_collectors_old_collection_time_in_millis
- gc_collectors_old_collection_count
- mem_pools_survivor_used_in_bytes
- mem_pools_old_used_in_bytes
- mem_pools_young_max_in_bytes
- mem_heap_max_in_bytes
- mem_non_heap_used_in_bytes
- mem_pools_survivor_committed_in_bytes
- mem_pools_old_max_in_bytes
- mem_heap_committed_in_bytes
- mem_pools_old_peak_max_in_bytes
- mem_pools_young_peak_used_in_bytes
- mem_heap_used_in_bytes
- gc_collectors_young_collection_count
- uptime_in_millis
- logstash_process
- tags:
- node_id
- node_name
- source
- node_version
- fields:
- open_file_descriptors
- cpu_load_average_1m
- cpu_load_average_5m
- cpu_load_average_15m
- cpu_total_in_millis
- cpu_percent
- peak_open_file_descriptors
- max_file_descriptors
- mem_total_virtual_in_bytes
- logstash_events
- tags:
- node_id
- node_name
- source
- node_version
- pipeline (for Logstash 6+)
- fields:
- queue_push_duration_in_millis
- duration_in_millis
- in
- filtered
- out
- logstash_plugins
- tags:
- node_id
- node_name
- source
- node_version
- pipeline (for Logstash 6+)
- plugin_id
- plugin_name
- plugin_type
- fields:
- queue_push_duration_in_millis (for input plugins only)
- duration_in_millis
- in
- out
- failures (if present)
- bulk_requests_failures (for Logstash 7+)
- bulk_requests_with_errors (for Logstash 7+)
- documents_successes (for Logstash 7+)
- documents_retryable_failures (for Logstash 7+)
- logstash_queue
- tags:
- node_id
- node_name
- source
- node_version
- pipeline (for Logstash 6+)
- queue_type
- fields:
- events
- free_space_in_bytes
- max_queue_size_in_bytes
- max_unread_events
- page_capacity_in_bytes
- queue_size_in_bytes
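Field names above are derived by flattening the nested JSON returned by the
Monitoring API, joining nested keys with underscores. A minimal sketch of that
flattening using Telegraf's JSON flattener (assuming the telegraf module is on
the module path; the input document is a hypothetical fragment):

```go
package main

import (
	"encoding/json"
	"fmt"

	parsers_json "github.com/influxdata/telegraf/plugins/parsers/json"
)

func main() {
	raw := []byte(`{"mem": {"pools": {"old": {"used_in_bytes": 255801728}}}}`)
	var doc interface{}
	if err := json.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	flattener := parsers_json.JSONFlattener{}
	if err := flattener.FlattenJSON("", doc); err != nil {
		panic(err)
	}
	// Prints map[mem_pools_old_used_in_bytes:2.55801728e+08]
	fmt.Println(flattener.Fields)
}
```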
## Example Output
```text
logstash_jvm,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,source=debian-stretch-logstash6.virt gc_collectors_old_collection_count=2,gc_collectors_old_collection_time_in_millis=100,gc_collectors_young_collection_count=26,gc_collectors_young_collection_time_in_millis=1028,mem_heap_committed_in_bytes=1056309248,mem_heap_max_in_bytes=1056309248,mem_heap_used_in_bytes=207216328,mem_heap_used_percent=19,mem_non_heap_committed_in_bytes=160878592,mem_non_heap_used_in_bytes=140838184,mem_pools_old_committed_in_bytes=899284992,mem_pools_old_max_in_bytes=899284992,mem_pools_old_peak_max_in_bytes=899284992,mem_pools_old_peak_used_in_bytes=189468088,mem_pools_old_used_in_bytes=189468088,mem_pools_survivor_committed_in_bytes=17432576,mem_pools_survivor_max_in_bytes=17432576,mem_pools_survivor_peak_max_in_bytes=17432576,mem_pools_survivor_peak_used_in_bytes=17432576,mem_pools_survivor_used_in_bytes=12572640,mem_pools_young_committed_in_bytes=139591680,mem_pools_young_max_in_bytes=139591680,mem_pools_young_peak_max_in_bytes=139591680,mem_pools_young_peak_used_in_bytes=139591680,mem_pools_young_used_in_bytes=5175600,threads_count=20,threads_peak_count=24,uptime_in_millis=739089 1566425244000000000
logstash_process,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,source=debian-stretch-logstash6.virt cpu_load_average_15m=0.03,cpu_load_average_1m=0.01,cpu_load_average_5m=0.04,cpu_percent=0,cpu_total_in_millis=83230,max_file_descriptors=16384,mem_total_virtual_in_bytes=3689132032,open_file_descriptors=118,peak_open_file_descriptors=118 1566425244000000000
logstash_events,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,source=debian-stretch-logstash6.virt duration_in_millis=0,filtered=0,in=0,out=0,queue_push_duration_in_millis=0 1566425244000000000
logstash_plugins,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,plugin_id=2807cb8610ba7854efa9159814fcf44c3dda762b43bd088403b30d42c88e69ab,plugin_name=beats,plugin_type=input,source=debian-stretch-logstash6.virt out=0,queue_push_duration_in_millis=0 1566425244000000000
logstash_plugins,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,plugin_id=7a6c973366186a695727c73935634a00bccd52fceedf30d0746983fce572d50c,plugin_name=file,plugin_type=output,source=debian-stretch-logstash6.virt duration_in_millis=0,in=0,out=0 1566425244000000000
logstash_queue,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,queue_type=memory,source=debian-stretch-logstash6.virt events=0 1566425244000000000
```


@@ -0,0 +1,519 @@
//go:generate ../../../tools/readme_config_includer/generator
package logstash
import (
"context"
_ "embed"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/internal/choice"
common_http "github.com/influxdata/telegraf/plugins/common/http"
"github.com/influxdata/telegraf/plugins/inputs"
parsers_json "github.com/influxdata/telegraf/plugins/parsers/json"
)
//go:embed sample.conf
var sampleConfig string
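// Paths of the Logstash Monitoring API stats endpoints. The singular
// "pipeline" path is the Logstash 5 single-pipeline API (selected via
// single_pipeline); the plural "pipelines" path is the multi-pipeline
// API of Logstash 6 and later.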
const (
jvmStatsNode = "/_node/stats/jvm"
processStatsNode = "/_node/stats/process"
pipelinesStatsNode = "/_node/stats/pipelines"
pipelineStatsNode = "/_node/stats/pipeline"
)
type Logstash struct {
URL string `toml:"url"`
SinglePipeline bool `toml:"single_pipeline"`
Collect []string `toml:"collect"`
Username string `toml:"username"`
Password string `toml:"password"`
Headers map[string]string `toml:"headers"`
Log telegraf.Logger `toml:"-"`
client *http.Client
common_http.HTTPClientConfig
}
type processStats struct {
ID string `json:"id"`
Process interface{} `json:"process"`
Name string `json:"name"`
Host string `json:"host"`
Version string `json:"version"`
}
type jvmStats struct {
ID string `json:"id"`
JVM interface{} `json:"jvm"`
Name string `json:"name"`
Host string `json:"host"`
Version string `json:"version"`
}
type pipelinesStats struct {
ID string `json:"id"`
Pipelines map[string]pipeline `json:"pipelines"`
Name string `json:"name"`
Host string `json:"host"`
Version string `json:"version"`
}
type pipelineStats struct {
ID string `json:"id"`
Pipeline pipeline `json:"pipeline"`
Name string `json:"name"`
Host string `json:"host"`
Version string `json:"version"`
}
type pipeline struct {
Events interface{} `json:"events"`
Plugins pipelinePlugins `json:"plugins"`
Reloads interface{} `json:"reloads"`
Queue pipelineQueue `json:"queue"`
}
type plugin struct {
ID string `json:"id"`
Events interface{} `json:"events"`
Name string `json:"name"`
Failures *int64 `json:"failures,omitempty"`
BulkRequests map[string]interface{} `json:"bulk_requests"`
Documents map[string]interface{} `json:"documents"`
}
type pipelinePlugins struct {
Inputs []plugin `json:"inputs"`
Filters []plugin `json:"filters"`
Outputs []plugin `json:"outputs"`
}
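// pipelineQueue mirrors the "queue" section of the stats document. Older
// Logstash versions report "events", while Logstash 7+ reports
// "events_count" and the *_in_bytes totals at the queue top level, so the
// optional fields are pointers that stay nil when absent.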
type pipelineQueue struct {
Events float64 `json:"events"`
EventsCount *float64 `json:"events_count"`
Type string `json:"type"`
Capacity interface{} `json:"capacity"`
Data interface{} `json:"data"`
QueueSizeInBytes *float64 `json:"queue_size_in_bytes"`
MaxQueueSizeInBytes *float64 `json:"max_queue_size_in_bytes"`
}
func (*Logstash) SampleConfig() string {
return sampleConfig
}
func (logstash *Logstash) Init() error {
err := choice.CheckSlice(logstash.Collect, []string{"pipelines", "process", "jvm"})
if err != nil {
return fmt.Errorf(`cannot verify "collect" setting: %w`, err)
}
return nil
}
func (*Logstash) Start(telegraf.Accumulator) error {
return nil
}
func (logstash *Logstash) Gather(accumulator telegraf.Accumulator) error {
if logstash.client == nil {
client, err := logstash.createHTTPClient()
if err != nil {
return err
}
logstash.client = client
}
if choice.Contains("jvm", logstash.Collect) {
jvmURL, err := url.Parse(logstash.URL + jvmStatsNode)
if err != nil {
return err
}
if err := logstash.gatherJVMStats(jvmURL.String(), accumulator); err != nil {
return err
}
}
if choice.Contains("process", logstash.Collect) {
processURL, err := url.Parse(logstash.URL + processStatsNode)
if err != nil {
return err
}
if err := logstash.gatherProcessStats(processURL.String(), accumulator); err != nil {
return err
}
}
if choice.Contains("pipelines", logstash.Collect) {
if logstash.SinglePipeline {
pipelineURL, err := url.Parse(logstash.URL + pipelineStatsNode)
if err != nil {
return err
}
if err := logstash.gatherPipelineStats(pipelineURL.String(), accumulator); err != nil {
return err
}
} else {
pipelinesURL, err := url.Parse(logstash.URL + pipelinesStatsNode)
if err != nil {
return err
}
if err := logstash.gatherPipelinesStats(pipelinesURL.String(), accumulator); err != nil {
return err
}
}
}
return nil
}
func (logstash *Logstash) Stop() {
if logstash.client != nil {
logstash.client.CloseIdleConnections()
}
}
// createHTTPClient creates an HTTP client to access the API
func (logstash *Logstash) createHTTPClient() (*http.Client, error) {
ctx := context.Background()
return logstash.HTTPClientConfig.CreateClient(ctx, logstash.Log)
}
// gatherJSONData queries the data source and parses the response JSON
func (logstash *Logstash) gatherJSONData(address string, value interface{}) error {
request, err := http.NewRequest("GET", address, nil)
if err != nil {
return err
}
if (logstash.Username != "") || (logstash.Password != "") {
request.SetBasicAuth(logstash.Username, logstash.Password)
}
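// Forward configured headers; per net/http, the Host header must be set
// on the request itself rather than in the header map.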
for header, value := range logstash.Headers {
if strings.EqualFold(header, "host") {
request.Host = value
} else {
request.Header.Add(header, value)
}
}
response, err := logstash.client.Do(request)
if err != nil {
return err
}
defer response.Body.Close()
if response.StatusCode != http.StatusOK {
//nolint:errcheck // LimitReader returns io.EOF and we're not interested in read errors.
body, _ := io.ReadAll(io.LimitReader(response.Body, 200))
return fmt.Errorf("%s returned HTTP status %s: %q", address, response.Status, body)
}
err = json.NewDecoder(response.Body).Decode(value)
if err != nil {
return err
}
return nil
}
// gatherJVMStats gathers the JVM metrics and adds the results to the accumulator
func (logstash *Logstash) gatherJVMStats(address string, accumulator telegraf.Accumulator) error {
jvmStats := &jvmStats{}
err := logstash.gatherJSONData(address, jvmStats)
if err != nil {
return err
}
tags := map[string]string{
"node_id": jvmStats.ID,
"node_name": jvmStats.Name,
"node_version": jvmStats.Version,
"source": jvmStats.Host,
}
flattener := parsers_json.JSONFlattener{}
err = flattener.FlattenJSON("", jvmStats.JVM)
if err != nil {
return err
}
accumulator.AddFields("logstash_jvm", flattener.Fields, tags)
return nil
}
// gatherProcessStats gathers the process metrics and adds the results to the accumulator
func (logstash *Logstash) gatherProcessStats(address string, accumulator telegraf.Accumulator) error {
processStats := &processStats{}
err := logstash.gatherJSONData(address, processStats)
if err != nil {
return err
}
tags := map[string]string{
"node_id": processStats.ID,
"node_name": processStats.Name,
"node_version": processStats.Version,
"source": processStats.Host,
}
flattener := parsers_json.JSONFlattener{}
err = flattener.FlattenJSON("", processStats.Process)
if err != nil {
return err
}
accumulator.AddFields("logstash_process", flattener.Fields, tags)
return nil
}
// gatherPluginsStats goes through a list of plugins and adds their metrics to the accumulator
func gatherPluginsStats(plugins []plugin, pluginType string, tags map[string]string, accumulator telegraf.Accumulator) error {
for _, plugin := range plugins {
pluginTags := map[string]string{
"plugin_name": plugin.Name,
"plugin_id": plugin.ID,
"plugin_type": pluginType,
}
for tag, value := range tags {
pluginTags[tag] = value
}
flattener := parsers_json.JSONFlattener{}
err := flattener.FlattenJSON("", plugin.Events)
if err != nil {
return err
}
accumulator.AddFields("logstash_plugins", flattener.Fields, pluginTags)
if plugin.Failures != nil {
failuresFields := map[string]interface{}{"failures": *plugin.Failures}
accumulator.AddFields("logstash_plugins", failuresFields, pluginTags)
}
/*
The elasticsearch & opensearch output produces additional stats
around bulk requests and document writes (that are elasticsearch
and opensearch specific). Collect those below:
*/
if pluginType == "output" && (plugin.Name == "elasticsearch" || plugin.Name == "opensearch") {
/*
The "bulk_requests" section has details about batch writes
into Elasticsearch
"bulk_requests" : {
"successes" : 2870,
"responses" : {
"200" : 2870
},
"failures": 262,
"with_errors": 9089
},
*/
flattener := parsers_json.JSONFlattener{}
err := flattener.FlattenJSON("", plugin.BulkRequests)
if err != nil {
return err
}
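// Namespace the flattened keys with "bulk_requests_", skipping keys
// that already carry the prefix.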
for k, v := range flattener.Fields {
if strings.HasPrefix(k, "bulk_requests") {
continue
}
newKey := "bulk_requests_" + k
flattener.Fields[newKey] = v
delete(flattener.Fields, k)
}
accumulator.AddFields("logstash_plugins", flattener.Fields, pluginTags)
/*
The "documents" section has counts of individual documents
written/retried/etc.
"documents" : {
"successes" : 2665549,
"retryable_failures": 13733
}
*/
flattener = parsers_json.JSONFlattener{}
err = flattener.FlattenJSON("", plugin.Documents)
if err != nil {
return err
}
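// Namespace the flattened keys with "documents_", skipping keys that
// already carry the prefix.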
for k, v := range flattener.Fields {
if strings.HasPrefix(k, "documents") {
continue
}
newKey := "documents_" + k
flattener.Fields[newKey] = v
delete(flattener.Fields, k)
}
accumulator.AddFields("logstash_plugins", flattener.Fields, pluginTags)
}
}
return nil
}
func gatherQueueStats(queue pipelineQueue, tags map[string]string, acc telegraf.Accumulator) error {
queueTags := map[string]string{
"queue_type": queue.Type,
}
for tag, value := range tags {
queueTags[tag] = value
}
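// Logstash 7+ reports "events_count"; prefer it over the older "events".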
events := queue.Events
if queue.EventsCount != nil {
events = *queue.EventsCount
}
queueFields := map[string]interface{}{
"events": events,
}
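// Capacity and on-disk data stats exist only for persisted queues.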
if queue.Type != "memory" {
flattener := parsers_json.JSONFlattener{}
err := flattener.FlattenJSON("", queue.Capacity)
if err != nil {
return err
}
err = flattener.FlattenJSON("", queue.Data)
if err != nil {
return err
}
for field, value := range flattener.Fields {
queueFields[field] = value
}
if queue.MaxQueueSizeInBytes != nil {
queueFields["max_queue_size_in_bytes"] = *queue.MaxQueueSizeInBytes
}
if queue.QueueSizeInBytes != nil {
queueFields["queue_size_in_bytes"] = *queue.QueueSizeInBytes
}
}
acc.AddFields("logstash_queue", queueFields, queueTags)
return nil
}
// gatherPipelineStats gathers the pipeline metrics and adds the results to the accumulator (for Logstash < 6)
func (logstash *Logstash) gatherPipelineStats(address string, accumulator telegraf.Accumulator) error {
pipelineStats := &pipelineStats{}
err := logstash.gatherJSONData(address, pipelineStats)
if err != nil {
return err
}
tags := map[string]string{
"node_id": pipelineStats.ID,
"node_name": pipelineStats.Name,
"node_version": pipelineStats.Version,
"source": pipelineStats.Host,
}
flattener := parsers_json.JSONFlattener{}
err = flattener.FlattenJSON("", pipelineStats.Pipeline.Events)
if err != nil {
return err
}
accumulator.AddFields("logstash_events", flattener.Fields, tags)
err = gatherPluginsStats(pipelineStats.Pipeline.Plugins.Inputs, "input", tags, accumulator)
if err != nil {
return err
}
err = gatherPluginsStats(pipelineStats.Pipeline.Plugins.Filters, "filter", tags, accumulator)
if err != nil {
return err
}
err = gatherPluginsStats(pipelineStats.Pipeline.Plugins.Outputs, "output", tags, accumulator)
if err != nil {
return err
}
err = gatherQueueStats(pipelineStats.Pipeline.Queue, tags, accumulator)
if err != nil {
return err
}
return nil
}
// gatherPipelinesStats gathers the pipelines metrics and adds the results to the accumulator (for Logstash >= 6)
func (logstash *Logstash) gatherPipelinesStats(address string, accumulator telegraf.Accumulator) error {
pipelinesStats := &pipelinesStats{}
err := logstash.gatherJSONData(address, pipelinesStats)
if err != nil {
return err
}
for pipelineName, pipeline := range pipelinesStats.Pipelines {
tags := map[string]string{
"node_id": pipelinesStats.ID,
"node_name": pipelinesStats.Name,
"node_version": pipelinesStats.Version,
"pipeline": pipelineName,
"source": pipelinesStats.Host,
}
flattener := parsers_json.JSONFlattener{}
err := flattener.FlattenJSON("", pipeline.Events)
if err != nil {
return err
}
accumulator.AddFields("logstash_events", flattener.Fields, tags)
err = gatherPluginsStats(pipeline.Plugins.Inputs, "input", tags, accumulator)
if err != nil {
return err
}
err = gatherPluginsStats(pipeline.Plugins.Filters, "filter", tags, accumulator)
if err != nil {
return err
}
err = gatherPluginsStats(pipeline.Plugins.Outputs, "output", tags, accumulator)
if err != nil {
return err
}
err = gatherQueueStats(pipeline.Queue, tags, accumulator)
if err != nil {
return err
}
}
return nil
}
func newLogstash() *Logstash {
return &Logstash{
URL: "http://127.0.0.1:9600",
Collect: []string{"pipelines", "process", "jvm"},
Headers: make(map[string]string),
HTTPClientConfig: common_http.HTTPClientConfig{
Timeout: config.Duration(5 * time.Second),
},
}
}
func init() {
inputs.Add("logstash", func() telegraf.Input {
return newLogstash()
})
}


@@ -0,0 +1,830 @@
package logstash
import (
"fmt"
"net"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf/testutil"
)
var logstashTest = newLogstash()
var (
logstash5accPipelineStats testutil.Accumulator
logstash6accPipelinesStats testutil.Accumulator
logstash7accPipelinesStats testutil.Accumulator
logstash5accProcessStats testutil.Accumulator
logstash6accProcessStats testutil.Accumulator
logstash5accJVMStats testutil.Accumulator
logstash6accJVMStats testutil.Accumulator
)
func Test_Logstash5GatherProcessStats(test *testing.T) {
fakeServer := httptest.NewUnstartedServer(http.HandlerFunc(func(writer http.ResponseWriter, _ *http.Request) {
writer.Header().Set("Content-Type", "application/json")
if _, err := fmt.Fprintf(writer, "%s", logstash5ProcessJSON); err != nil {
writer.WriteHeader(http.StatusInternalServerError)
test.Error(err)
return
}
}))
requestURL, err := url.Parse(logstashTest.URL)
require.NoErrorf(test, err, "Can't connect to: %s", logstashTest.URL)
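// Rebind the fake server to the host/port of the plugin's default URL so
// the gather calls below can hit logstashTest.URL directly.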
fakeServer.Listener, err = net.Listen("tcp", fmt.Sprintf("%s:%s", requestURL.Hostname(), requestURL.Port()))
require.NoError(test, err)
fakeServer.Start()
defer fakeServer.Close()
if logstashTest.client == nil {
client, err := logstashTest.createHTTPClient()
require.NoError(test, err, "Can't createHTTPClient")
logstashTest.client = client
}
err = logstashTest.gatherProcessStats(logstashTest.URL+processStatsNode, &logstash5accProcessStats)
require.NoError(test, err, "Can't gather Process stats")
logstash5accProcessStats.AssertContainsTaggedFields(
test,
"logstash_process",
map[string]interface{}{
"open_file_descriptors": float64(89.0),
"max_file_descriptors": float64(1.048576e+06),
"cpu_percent": float64(3.0),
"cpu_load_average_5m": float64(0.61),
"cpu_load_average_15m": float64(0.54),
"mem_total_virtual_in_bytes": float64(4.809506816e+09),
"cpu_total_in_millis": float64(1.5526e+11),
"cpu_load_average_1m": float64(0.49),
"peak_open_file_descriptors": float64(100.0),
},
map[string]string{
"node_id": string("a360d8cf-6289-429d-8419-6145e324b574"),
"node_name": string("node-5-test"),
"source": string("node-5"),
"node_version": string("5.3.0"),
},
)
}
func Test_Logstash6GatherProcessStats(test *testing.T) {
fakeServer := httptest.NewUnstartedServer(http.HandlerFunc(func(writer http.ResponseWriter, _ *http.Request) {
writer.Header().Set("Content-Type", "application/json")
if _, err := fmt.Fprintf(writer, "%s", logstash6ProcessJSON); err != nil {
writer.WriteHeader(http.StatusInternalServerError)
test.Error(err)
return
}
}))
requestURL, err := url.Parse(logstashTest.URL)
require.NoErrorf(test, err, "Can't connect to: %s", logstashTest.URL)
fakeServer.Listener, err = net.Listen("tcp", fmt.Sprintf("%s:%s", requestURL.Hostname(), requestURL.Port()))
require.NoError(test, err)
fakeServer.Start()
defer fakeServer.Close()
if logstashTest.client == nil {
client, err := logstashTest.createHTTPClient()
require.NoError(test, err, "Can't createHTTPClient")
logstashTest.client = client
}
err = logstashTest.gatherProcessStats(logstashTest.URL+processStatsNode, &logstash6accProcessStats)
require.NoError(test, err, "Can't gather Process stats")
logstash6accProcessStats.AssertContainsTaggedFields(
test,
"logstash_process",
map[string]interface{}{
"open_file_descriptors": float64(133.0),
"max_file_descriptors": float64(262144.0),
"cpu_percent": float64(0.0),
"cpu_load_average_5m": float64(42.4),
"cpu_load_average_15m": float64(38.95),
"mem_total_virtual_in_bytes": float64(17923452928.0),
"cpu_total_in_millis": float64(5841460),
"cpu_load_average_1m": float64(48.2),
"peak_open_file_descriptors": float64(145.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
},
)
}
func Test_Logstash5GatherPipelineStats(test *testing.T) {
logstash5accPipelineStats.SetDebug(true)
fakeServer := httptest.NewUnstartedServer(http.HandlerFunc(func(writer http.ResponseWriter, _ *http.Request) {
writer.Header().Set("Content-Type", "application/json")
if _, err := fmt.Fprintf(writer, "%s", logstash5PipelineJSON); err != nil {
writer.WriteHeader(http.StatusInternalServerError)
test.Error(err)
return
}
}))
requestURL, err := url.Parse(logstashTest.URL)
require.NoErrorf(test, err, "Can't connect to: %s", logstashTest.URL)
fakeServer.Listener, err = net.Listen("tcp", fmt.Sprintf("%s:%s", requestURL.Hostname(), requestURL.Port()))
require.NoError(test, err)
fakeServer.Start()
defer fakeServer.Close()
if logstashTest.client == nil {
client, err := logstashTest.createHTTPClient()
require.NoError(test, err, "Can't createHTTPClient")
logstashTest.client = client
}
err = logstashTest.gatherPipelineStats(logstashTest.URL+pipelineStatsNode, &logstash5accPipelineStats)
require.NoError(test, err, "Can't gather Pipeline stats")
logstash5accPipelineStats.AssertContainsTaggedFields(
test,
"logstash_events",
map[string]interface{}{
"duration_in_millis": float64(1151.0),
"in": float64(1269.0),
"filtered": float64(1269.0),
"out": float64(1269.0),
},
map[string]string{
"node_id": string("a360d8cf-6289-429d-8419-6145e324b574"),
"node_name": string("node-5-test"),
"source": string("node-5"),
"node_version": string("5.3.0"),
},
)
fields := make(map[string]interface{})
fields["queue_push_duration_in_millis"] = float64(32.0)
fields["out"] = float64(2.0)
logstash5accPipelineStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
fields,
map[string]string{
"node_id": string("a360d8cf-6289-429d-8419-6145e324b574"),
"node_name": string("node-5-test"),
"source": string("node-5"),
"node_version": string("5.3.0"),
"plugin_name": string("beats"),
"plugin_id": string("a35197a509596954e905e38521bae12b1498b17d-1"),
"plugin_type": string("input"),
},
)
logstash5accPipelineStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(360.0),
"in": float64(1269.0),
"out": float64(1269.0),
},
map[string]string{
"node_id": string("a360d8cf-6289-429d-8419-6145e324b574"),
"node_name": string("node-5-test"),
"source": string("node-5"),
"node_version": string("5.3.0"),
"plugin_name": string("stdout"),
"plugin_id": string("582d5c2becb582a053e1e9a6bcc11d49b69a6dfd-2"),
"plugin_type": string("output"),
},
)
logstash5accPipelineStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(228.0),
"in": float64(1269.0),
"out": float64(1269.0),
},
map[string]string{
"node_id": string("a360d8cf-6289-429d-8419-6145e324b574"),
"node_name": string("node-5-test"),
"source": string("node-5"),
"node_version": string("5.3.0"),
"plugin_name": string("s3"),
"plugin_id": string("582d5c2becb582a053e1e9a6bcc11d49b69a6dfd-3"),
"plugin_type": string("output"),
},
)
}
func Test_Logstash6GatherPipelinesStats(test *testing.T) {
logstash6accPipelinesStats.SetDebug(true)
fakeServer := httptest.NewUnstartedServer(http.HandlerFunc(func(writer http.ResponseWriter, _ *http.Request) {
writer.Header().Set("Content-Type", "application/json")
if _, err := fmt.Fprintf(writer, "%s", logstash6PipelinesJSON); err != nil {
writer.WriteHeader(http.StatusInternalServerError)
test.Error(err)
return
}
}))
requestURL, err := url.Parse(logstashTest.URL)
require.NoErrorf(test, err, "Can't connect to: %s", logstashTest.URL)
fakeServer.Listener, err = net.Listen("tcp", fmt.Sprintf("%s:%s", requestURL.Hostname(), requestURL.Port()))
require.NoError(test, err)
fakeServer.Start()
defer fakeServer.Close()
if logstashTest.client == nil {
client, err := logstashTest.createHTTPClient()
require.NoError(test, err, "Can't createHTTPClient")
logstashTest.client = client
}
err = logstashTest.gatherPipelinesStats(logstashTest.URL+pipelinesStatsNode, &logstash6accPipelinesStats)
require.NoError(test, err, "Can't gather Pipeline stats")
fields := make(map[string]interface{})
fields["duration_in_millis"] = float64(8540751.0)
fields["queue_push_duration_in_millis"] = float64(366.0)
fields["in"] = float64(180659.0)
fields["filtered"] = float64(180659.0)
fields["out"] = float64(180659.0)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_events",
fields,
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
},
)
fields = make(map[string]interface{})
fields["queue_push_duration_in_millis"] = float64(366.0)
fields["out"] = float64(180659.0)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
fields,
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("kafka"),
"plugin_id": string("input-kafka"),
"plugin_type": string("input"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(2117.0),
"in": float64(27641.0),
"out": float64(27641.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("mutate"),
"plugin_id": string("155b0ad18abbf3df1e0cb7bddef0d77c5ba699efe5a0f8a28502d140549baf54"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(2117.0),
"in": float64(27641.0),
"out": float64(27641.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("mutate"),
"plugin_id": string("155b0ad18abbf3df1e0cb7bddef0d77c5ba699efe5a0f8a28502d140549baf54"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(13149.0),
"in": float64(180659.0),
"out": float64(177549.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("date"),
"plugin_id": string("d079424bb6b7b8c7c61d9c5e0ddae445e92fa9ffa2e8690b0a669f7c690542f0"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"failures": int64(2),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("date"),
"plugin_id": string("d079424bb6b7b8c7c61d9c5e0ddae445e92fa9ffa2e8690b0a669f7c690542f0"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(2814.0),
"in": float64(76602.0),
"out": float64(76602.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("mutate"),
"plugin_id": string("25afa60ab6dc30512fe80efa3493e4928b5b1b109765b7dc46a3e4bbf293d2d4"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(9.0),
"in": float64(934.0),
"out": float64(934.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("mutate"),
"plugin_id": string("2d9fa8f74eeb137bfa703b8050bad7d76636fface729e4585b789b5fc9bed668"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(173.0),
"in": float64(3110.0),
"out": float64(0.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("drop"),
"plugin_id": string("4ed14c9ef0198afe16c31200041e98d321cb5c2e6027e30b077636b8c4842110"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(5605.0),
"in": float64(75482.0),
"out": float64(75482.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("mutate"),
"plugin_id": string("358ce1eb387de7cd5711c2fb4de64cd3b12e5ca9a4c45f529516bcb053a31df4"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(313992.0),
"in": float64(180659.0),
"out": float64(180659.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("csv"),
"plugin_id": string("82a9bbb02fff37a63c257c1f146b0a36273c7cbbebe83c0a51f086e5280bf7bb"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(0.0),
"in": float64(0.0),
"out": float64(0.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("mutate"),
"plugin_id": string("8fb13a8cdd4257b52724d326aa1549603ffdd4e4fde6d20720c96b16238c18c3"),
"plugin_type": string("filter"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(651386.0),
"in": float64(177549.0),
"out": float64(177549.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("elasticsearch"),
"plugin_id": string("output-elk"),
"plugin_type": string("output"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(186751.0),
"in": float64(177549.0),
"out": float64(177549.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("kafka"),
"plugin_id": string("output-kafka1"),
"plugin_type": string("output"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(7335196.0),
"in": float64(177549.0),
"out": float64(177549.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"plugin_name": string("kafka"),
"plugin_id": string("output-kafka2"),
"plugin_type": string("output"),
},
)
logstash6accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_queue",
map[string]interface{}{
"events": float64(103),
"free_space_in_bytes": float64(36307369984),
"max_queue_size_in_bytes": float64(1073741824),
"max_unread_events": float64(0),
"page_capacity_in_bytes": float64(67108864),
"queue_size_in_bytes": float64(1872391),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
"pipeline": string("main"),
"queue_type": string("persisted"),
},
)
}
func Test_Logstash5GatherJVMStats(test *testing.T) {
fakeServer := httptest.NewUnstartedServer(http.HandlerFunc(func(writer http.ResponseWriter, _ *http.Request) {
writer.Header().Set("Content-Type", "application/json")
if _, err := fmt.Fprintf(writer, "%s", logstash5JvmJSON); err != nil {
writer.WriteHeader(http.StatusInternalServerError)
test.Error(err)
return
}
}))
requestURL, err := url.Parse(logstashTest.URL)
require.NoErrorf(test, err, "Can't connect to: %s", logstashTest.URL)
fakeServer.Listener, err = net.Listen("tcp", fmt.Sprintf("%s:%s", requestURL.Hostname(), requestURL.Port()))
require.NoError(test, err)
fakeServer.Start()
defer fakeServer.Close()
if logstashTest.client == nil {
client, err := logstashTest.createHTTPClient()
require.NoError(test, err, "Can't createHTTPClient")
logstashTest.client = client
}
err = logstashTest.gatherJVMStats(logstashTest.URL+jvmStatsNode, &logstash5accJVMStats)
require.NoError(test, err, "Can't gather JVM stats")
logstash5accJVMStats.AssertContainsTaggedFields(
test,
"logstash_jvm",
map[string]interface{}{
"mem_pools_young_max_in_bytes": float64(5.5836672e+08),
"mem_pools_young_committed_in_bytes": float64(1.43261696e+08),
"mem_heap_committed_in_bytes": float64(5.1904512e+08),
"threads_count": float64(29.0),
"mem_pools_old_peak_used_in_bytes": float64(1.27900864e+08),
"mem_pools_old_peak_max_in_bytes": float64(7.2482816e+08),
"mem_heap_used_percent": float64(16.0),
"gc_collectors_young_collection_time_in_millis": float64(3235.0),
"mem_pools_survivor_committed_in_bytes": float64(1.7825792e+07),
"mem_pools_young_used_in_bytes": float64(7.6049384e+07),
"mem_non_heap_committed_in_bytes": float64(2.91487744e+08),
"mem_pools_survivor_peak_max_in_bytes": float64(3.4865152e+07),
"mem_pools_young_peak_max_in_bytes": float64(2.7918336e+08),
"uptime_in_millis": float64(4.803461e+06),
"mem_pools_survivor_peak_used_in_bytes": float64(8.912896e+06),
"mem_pools_survivor_max_in_bytes": float64(6.9730304e+07),
"gc_collectors_old_collection_count": float64(2.0),
"mem_pools_survivor_used_in_bytes": float64(9.419672e+06),
"mem_pools_old_used_in_bytes": float64(2.55801728e+08),
"mem_pools_old_max_in_bytes": float64(1.44965632e+09),
"mem_pools_young_peak_used_in_bytes": float64(7.1630848e+07),
"mem_heap_used_in_bytes": float64(3.41270784e+08),
"mem_heap_max_in_bytes": float64(2.077753344e+09),
"gc_collectors_young_collection_count": float64(616.0),
"threads_peak_count": float64(31.0),
"mem_pools_old_committed_in_bytes": float64(3.57957632e+08),
"gc_collectors_old_collection_time_in_millis": float64(114.0),
"mem_non_heap_used_in_bytes": float64(2.68905936e+08),
},
map[string]string{
"node_id": string("a360d8cf-6289-429d-8419-6145e324b574"),
"node_name": string("node-5-test"),
"source": string("node-5"),
"node_version": string("5.3.0"),
},
)
}
func Test_Logstash6GatherJVMStats(test *testing.T) {
fakeServer := httptest.NewUnstartedServer(http.HandlerFunc(func(writer http.ResponseWriter, _ *http.Request) {
writer.Header().Set("Content-Type", "application/json")
if _, err := fmt.Fprintf(writer, "%s", logstash6JvmJSON); err != nil {
writer.WriteHeader(http.StatusInternalServerError)
test.Error(err)
return
}
}))
requestURL, err := url.Parse(logstashTest.URL)
require.NoErrorf(test, err, "Can't connect to: %s", logstashTest.URL)
fakeServer.Listener, err = net.Listen("tcp", fmt.Sprintf("%s:%s", requestURL.Hostname(), requestURL.Port()))
require.NoError(test, err)
fakeServer.Start()
defer fakeServer.Close()
if logstashTest.client == nil {
client, err := logstashTest.createHTTPClient()
require.NoError(test, err, "Can't createHTTPClient")
logstashTest.client = client
}
err = logstashTest.gatherJVMStats(logstashTest.URL+jvmStatsNode, &logstash6accJVMStats)
require.NoError(test, err, "Can't gather JVM stats")
logstash6accJVMStats.AssertContainsTaggedFields(
test,
"logstash_jvm",
map[string]interface{}{
"mem_pools_young_max_in_bytes": float64(1605304320.0),
"mem_pools_young_committed_in_bytes": float64(71630848.0),
"mem_heap_committed_in_bytes": float64(824963072.0),
"threads_count": float64(60.0),
"mem_pools_old_peak_used_in_bytes": float64(696572600.0),
"mem_pools_old_peak_max_in_bytes": float64(6583418880.0),
"mem_heap_used_percent": float64(2.0),
"gc_collectors_young_collection_time_in_millis": float64(107321.0),
"mem_pools_survivor_committed_in_bytes": float64(8912896.0),
"mem_pools_young_used_in_bytes": float64(11775120.0),
"mem_non_heap_committed_in_bytes": float64(222986240.0),
"mem_pools_survivor_peak_max_in_bytes": float64(200605696),
"mem_pools_young_peak_max_in_bytes": float64(1605304320.0),
"uptime_in_millis": float64(281850926.0),
"mem_pools_survivor_peak_used_in_bytes": float64(8912896.0),
"mem_pools_survivor_max_in_bytes": float64(200605696.0),
"gc_collectors_old_collection_count": float64(37.0),
"mem_pools_survivor_used_in_bytes": float64(835008.0),
"mem_pools_old_used_in_bytes": float64(189750576.0),
"mem_pools_old_max_in_bytes": float64(6583418880.0),
"mem_pools_young_peak_used_in_bytes": float64(71630848.0),
"mem_heap_used_in_bytes": float64(202360704.0),
"mem_heap_max_in_bytes": float64(8389328896.0),
"gc_collectors_young_collection_count": float64(2094.0),
"threads_peak_count": float64(62.0),
"mem_pools_old_committed_in_bytes": float64(744419328.0),
"gc_collectors_old_collection_time_in_millis": float64(7492.0),
"mem_non_heap_used_in_bytes": float64(197878896.0),
},
map[string]string{
"node_id": string("3044f675-21ce-4335-898a-8408aa678245"),
"node_name": string("node-6-test"),
"source": string("node-6"),
"node_version": string("6.4.2"),
},
)
}
func Test_Logstash7GatherPipelinesQueueStats(test *testing.T) {
fakeServer := httptest.NewUnstartedServer(http.HandlerFunc(func(writer http.ResponseWriter, _ *http.Request) {
writer.Header().Set("Content-Type", "application/json")
if _, err := fmt.Fprintf(writer, "%s", logstash7PipelinesJSON); err != nil {
writer.WriteHeader(http.StatusInternalServerError)
test.Error(err)
return
}
}))
requestURL, err := url.Parse(logstashTest.URL)
require.NoErrorf(test, err, "Can't connect to: %s", logstashTest.URL)
fakeServer.Listener, err = net.Listen("tcp", fmt.Sprintf("%s:%s", requestURL.Hostname(), requestURL.Port()))
require.NoError(test, err)
fakeServer.Start()
defer fakeServer.Close()
if logstashTest.client == nil {
client, err := logstashTest.createHTTPClient()
require.NoError(test, err, "Can't createHTTPClient")
logstashTest.client = client
}
err = logstashTest.gatherPipelinesStats(logstashTest.URL+pipelinesStatsNode, &logstash7accPipelinesStats)
require.NoError(test, err, "Can't gather Pipeline stats")
fields := make(map[string]interface{})
fields["duration_in_millis"] = float64(3032875.0)
fields["queue_push_duration_in_millis"] = float64(13300.0)
fields["in"] = float64(2665549.0)
fields["filtered"] = float64(2665549.0)
fields["out"] = float64(2665549.0)
logstash7accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_events",
fields,
map[string]string{
"node_id": string("28580380-ad2c-4032-934b-76359125edca"),
"node_name": string("HOST01.local"),
"source": string("HOST01.local"),
"node_version": string("7.4.2"),
"pipeline": string("infra"),
},
)
logstash7accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"duration_in_millis": float64(2802177.0),
"in": float64(2665549.0),
"out": float64(2665549.0),
},
map[string]string{
"node_id": string("28580380-ad2c-4032-934b-76359125edca"),
"node_name": string("HOST01.local"),
"source": string("HOST01.local"),
"node_version": string("7.4.2"),
"pipeline": string("infra"),
"plugin_name": string("elasticsearch"),
"plugin_id": string("38967f09bbd2647a95aa00702b6b557bdbbab31da6a04f991d38abe5629779e3"),
"plugin_type": string("output"),
},
)
logstash7accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"bulk_requests_successes": float64(2870),
"bulk_requests_responses_200": float64(2870),
"bulk_requests_failures": float64(262),
"bulk_requests_with_errors": float64(9089),
},
map[string]string{
"node_id": string("28580380-ad2c-4032-934b-76359125edca"),
"node_name": string("HOST01.local"),
"source": string("HOST01.local"),
"node_version": string("7.4.2"),
"pipeline": string("infra"),
"plugin_name": string("elasticsearch"),
"plugin_id": string("38967f09bbd2647a95aa00702b6b557bdbbab31da6a04f991d38abe5629779e3"),
"plugin_type": string("output"),
},
)
logstash7accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_plugins",
map[string]interface{}{
"documents_successes": float64(2665549),
"documents_retryable_failures": float64(13733),
},
map[string]string{
"node_id": string("28580380-ad2c-4032-934b-76359125edca"),
"node_name": string("HOST01.local"),
"source": string("HOST01.local"),
"node_version": string("7.4.2"),
"pipeline": string("infra"),
"plugin_name": string("elasticsearch"),
"plugin_id": string("38967f09bbd2647a95aa00702b6b557bdbbab31da6a04f991d38abe5629779e3"),
"plugin_type": string("output"),
},
)
logstash7accPipelinesStats.AssertContainsTaggedFields(
test,
"logstash_queue",
map[string]interface{}{
"events": float64(0),
"max_queue_size_in_bytes": float64(4294967296),
"queue_size_in_bytes": float64(32028566),
},
map[string]string{
"node_id": string("28580380-ad2c-4032-934b-76359125edca"),
"node_name": string("HOST01.local"),
"source": string("HOST01.local"),
"node_version": string("7.4.2"),
"pipeline": string("infra"),
"queue_type": string("persisted"),
},
)
}


@@ -0,0 +1,38 @@
# Read metrics exposed by Logstash
[[inputs.logstash]]
## The URL of the exposed Logstash API endpoint.
url = "http://127.0.0.1:9600"
## Use Logstash 5 single pipeline API, set to true when monitoring
## Logstash 5.
# single_pipeline = false
## Enable optional collection components. Can contain
## "pipelines", "process", and "jvm".
# collect = ["pipelines", "process", "jvm"]
## Timeout for HTTP requests.
# timeout = "5s"
## Optional HTTP Basic Auth credentials.
# username = "username"
# password = "pa$$word"
## Optional TLS Config.
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification.
# insecure_skip_verify = false
## If 'use_system_proxy' is set to true, Telegraf will check env vars such as
## HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or their lowercase counterparts).
## If 'use_system_proxy' is set to false (default) and 'http_proxy_url' is
## provided, Telegraf will use the specified URL as HTTP proxy.
# use_system_proxy = false
# http_proxy_url = "http://localhost:8888"
## Optional HTTP headers.
# [inputs.logstash.headers]
# "X-Special-Header" = "Special-Value"


@@ -0,0 +1,156 @@
package logstash
const logstash5ProcessJSON = `
{
"host" : "node-5",
"version" : "5.3.0",
"http_address" : "0.0.0.0:9600",
"id" : "a360d8cf-6289-429d-8419-6145e324b574",
"name" : "node-5-test",
"process" : {
"open_file_descriptors" : 89,
"peak_open_file_descriptors" : 100,
"max_file_descriptors" : 1048576,
"mem" : {
"total_virtual_in_bytes" : 4809506816
},
"cpu" : {
"total_in_millis" : 155260000000,
"percent" : 3,
"load_average" : {
"1m" : 0.49,
"5m" : 0.61,
"15m" : 0.54
}
}
}
}
`
const logstash5JvmJSON = `
{
"host" : "node-5",
"version" : "5.3.0",
"http_address" : "0.0.0.0:9600",
"id" : "a360d8cf-6289-429d-8419-6145e324b574",
"name" : "node-5-test",
"jvm" : {
"threads" : {
"count" : 29,
"peak_count" : 31
},
"mem" : {
"heap_used_in_bytes" : 341270784,
"heap_used_percent" : 16,
"heap_committed_in_bytes" : 519045120,
"heap_max_in_bytes" : 2077753344,
"non_heap_used_in_bytes" : 268905936,
"non_heap_committed_in_bytes" : 291487744,
"pools" : {
"survivor" : {
"peak_used_in_bytes" : 8912896,
"used_in_bytes" : 9419672,
"peak_max_in_bytes" : 34865152,
"max_in_bytes" : 69730304,
"committed_in_bytes" : 17825792
},
"old" : {
"peak_used_in_bytes" : 127900864,
"used_in_bytes" : 255801728,
"peak_max_in_bytes" : 724828160,
"max_in_bytes" : 1449656320,
"committed_in_bytes" : 357957632
},
"young" : {
"peak_used_in_bytes" : 71630848,
"used_in_bytes" : 76049384,
"peak_max_in_bytes" : 279183360,
"max_in_bytes" : 558366720,
"committed_in_bytes" : 143261696
}
}
},
"gc" : {
"collectors" : {
"old" : {
"collection_time_in_millis" : 114,
"collection_count" : 2
},
"young" : {
"collection_time_in_millis" : 3235,
"collection_count" : 616
}
}
},
"uptime_in_millis" : 4803461
}
}
`
const logstash5PipelineJSON = `
{
"host" : "node-5",
"version" : "5.3.0",
"http_address" : "0.0.0.0:9600",
"id" : "a360d8cf-6289-429d-8419-6145e324b574",
"name" : "node-5-test",
"pipeline" : {
"events" : {
"duration_in_millis" : 1151,
"in" : 1269,
"filtered" : 1269,
"out" : 1269
},
"plugins" : {
"inputs" : [ {
"id" : "a35197a509596954e905e38521bae12b1498b17d-1",
"events" : {
"out" : 2,
"queue_push_duration_in_millis" : 32
},
"name" : "beats"
} ],
"filters" : [ ],
"outputs" : [ {
"id" : "582d5c2becb582a053e1e9a6bcc11d49b69a6dfd-3",
"events" : {
"duration_in_millis" : 228,
"in" : 1269,
"out" : 1269
},
"name" : "s3"
}, {
"id" : "582d5c2becb582a053e1e9a6bcc11d49b69a6dfd-2",
"events" : {
"duration_in_millis" : 360,
"in" : 1269,
"out" : 1269
},
"name" : "stdout"
} ]
},
"reloads" : {
"last_error" : null,
"successes" : 0,
"last_success_timestamp" : null,
"last_failure_timestamp" : null,
"failures" : 0
},
"queue" : {
"events" : 208,
"type" : "persisted",
"capacity" : {
"page_capacity_in_bytes" : 262144000,
"max_queue_size_in_bytes" : 8589934592,
"max_unread_events" : 0
},
"data" : {
"path" : "/path/to/data/queue",
"free_space_in_bytes" : 89280552960,
"storage_type" : "hfs"
}
},
"id" : "main"
}
}
`


@@ -0,0 +1,256 @@
package logstash
const logstash6ProcessJSON = `
{
"host" : "node-6",
"version" : "6.4.2",
"http_address" : "127.0.0.1:9600",
"id" : "3044f675-21ce-4335-898a-8408aa678245",
"name" : "node-6-test",
"process" : {
"open_file_descriptors" : 133,
"peak_open_file_descriptors" : 145,
"max_file_descriptors" : 262144,
"mem" : {
"total_virtual_in_bytes" : 17923452928
},
"cpu" : {
"total_in_millis" : 5841460,
"percent" : 0,
"load_average" : {
"1m" : 48.2,
"5m" : 42.4,
"15m" : 38.95
}
}
}
}
`
const logstash6JvmJSON = `
{
"host" : "node-6",
"version" : "6.4.2",
"http_address" : "127.0.0.1:9600",
"id" : "3044f675-21ce-4335-898a-8408aa678245",
"name" : "node-6-test",
"jvm" : {
"threads" : {
"count" : 60,
"peak_count" : 62
},
"mem" : {
"heap_used_percent" : 2,
"heap_committed_in_bytes" : 824963072,
"heap_max_in_bytes" : 8389328896,
"heap_used_in_bytes" : 202360704,
"non_heap_used_in_bytes" : 197878896,
"non_heap_committed_in_bytes" : 222986240,
"pools" : {
"survivor" : {
"peak_used_in_bytes" : 8912896,
"used_in_bytes" : 835008,
"peak_max_in_bytes" : 200605696,
"max_in_bytes" : 200605696,
"committed_in_bytes" : 8912896
},
"old" : {
"peak_used_in_bytes" : 696572600,
"used_in_bytes" : 189750576,
"peak_max_in_bytes" : 6583418880,
"max_in_bytes" : 6583418880,
"committed_in_bytes" : 744419328
},
"young" : {
"peak_used_in_bytes" : 71630848,
"used_in_bytes" : 11775120,
"peak_max_in_bytes" : 1605304320,
"max_in_bytes" : 1605304320,
"committed_in_bytes" : 71630848
}
}
},
"gc" : {
"collectors" : {
"old" : {
"collection_time_in_millis" : 7492,
"collection_count" : 37
},
"young" : {
"collection_time_in_millis" : 107321,
"collection_count" : 2094
}
}
},
"uptime_in_millis" : 281850926
}
}
`
const logstash6PipelinesJSON = `
{
"host" : "node-6",
"version" : "6.4.2",
"http_address" : "127.0.0.1:9600",
"id" : "3044f675-21ce-4335-898a-8408aa678245",
"name" : "node-6-test",
"pipelines" : {
"main" : {
"events" : {
"duration_in_millis" : 8540751,
"in" : 180659,
"out" : 180659,
"filtered" : 180659,
"queue_push_duration_in_millis" : 366
},
"plugins" : {
"inputs" : [
{
"id" : "input-kafka",
"events" : {
"out" : 180659,
"queue_push_duration_in_millis" : 366
},
"name" : "kafka"
}
],
"filters" : [
{
"id" : "155b0ad18abbf3df1e0cb7bddef0d77c5ba699efe5a0f8a28502d140549baf54",
"events" : {
"duration_in_millis" : 2117,
"in" : 27641,
"out" : 27641
},
"name" : "mutate"
},
{
"id" : "d079424bb6b7b8c7c61d9c5e0ddae445e92fa9ffa2e8690b0a669f7c690542f0",
"events" : {
"duration_in_millis" : 13149,
"in" : 180659,
"out" : 177549
},
"matches" : 177546,
"failures" : 2,
"name" : "date"
},
{
"id" : "25afa60ab6dc30512fe80efa3493e4928b5b1b109765b7dc46a3e4bbf293d2d4",
"events" : {
"duration_in_millis" : 2814,
"in" : 76602,
"out" : 76602
},
"name" : "mutate"
},
{
"id" : "2d9fa8f74eeb137bfa703b8050bad7d76636fface729e4585b789b5fc9bed668",
"events" : {
"duration_in_millis" : 9,
"in" : 934,
"out" : 934
},
"name" : "mutate"
},
{
"id" : "4ed14c9ef0198afe16c31200041e98d321cb5c2e6027e30b077636b8c4842110",
"events" : {
"duration_in_millis" : 173,
"in" : 3110,
"out" : 0
},
"name" : "drop"
},
{
"id" : "358ce1eb387de7cd5711c2fb4de64cd3b12e5ca9a4c45f529516bcb053a31df4",
"events" : {
"duration_in_millis" : 5605,
"in" : 75482,
"out" : 75482
},
"name" : "mutate"
},
{
"id" : "82a9bbb02fff37a63c257c1f146b0a36273c7cbbebe83c0a51f086e5280bf7bb",
"events" : {
"duration_in_millis" : 313992,
"in" : 180659,
"out" : 180659
},
"name" : "csv"
},
{
"id" : "8fb13a8cdd4257b52724d326aa1549603ffdd4e4fde6d20720c96b16238c18c3",
"events" : {
"duration_in_millis" : 0,
"in" : 0,
"out" : 0
},
"name" : "mutate"
}
],
"outputs" : [
{
"id" : "output-elk",
"documents" : {
"successes" : 221
},
"events" : {
"duration_in_millis" : 651386,
"in" : 177549,
"out" : 177549
},
"bulk_requests" : {
"successes" : 1,
"responses" : {
"200" : 748
}
},
"name" : "elasticsearch"
},
{
"id" : "output-kafka1",
"events" : {
"duration_in_millis" : 186751,
"in" : 177549,
"out" : 177549
},
"name" : "kafka"
},
{
"id" : "output-kafka2",
"events" : {
"duration_in_millis" : 7335196,
"in" : 177549,
"out" : 177549
},
"name" : "kafka"
}
]
},
"reloads" : {
"last_error" : null,
"successes" : 0,
"last_success_timestamp" : null,
"last_failure_timestamp" : null,
"failures" : 0
},
"queue": {
"events": 103,
"type": "persisted",
"capacity": {
"queue_size_in_bytes": 1872391,
"page_capacity_in_bytes": 67108864,
"max_queue_size_in_bytes": 1073741824,
"max_unread_events": 0
},
"data": {
"path": "/var/lib/logstash/queue/main",
"free_space_in_bytes": 36307369984,
"storage_type": "ext4"
}
}
}
}
}
`


@@ -0,0 +1,140 @@
package logstash
const logstash7PipelinesJSON = `
{
"host" : "HOST01.local",
"version" : "7.4.2",
"http_address" : "127.0.0.1:9600",
"id" : "28580380-ad2c-4032-934b-76359125edca",
"name" : "HOST01.local",
"ephemeral_id" : "bd95ff6b-3fa8-42ae-be32-098a4e4ea1ec",
"status" : "green",
"snapshot" : true,
"pipeline" : {
"workers" : 8,
"batch_size" : 125,
"batch_delay" : 50
},
"pipelines" : {
"infra" : {
"events" : {
"in" : 2665549,
"out" : 2665549,
"duration_in_millis" : 3032875,
"filtered" : 2665549,
"queue_push_duration_in_millis" : 13300
},
"plugins" : {
"inputs" : [ {
"id" : "8526dc80bc2257ab08f96018f96b0c68dd03abc5695bb22fb9e96339a8dfb4f86",
"events" : {
"out" : 2665549,
"queue_push_duration_in_millis" : 13300
},
"peak_connections" : 1,
"name" : "beats",
"current_connections" : 1
} ],
"codecs" : [ {
"id" : "plain_7312c097-1e7f-41db-983b-4f5a87a9eba2",
"encode" : {
"duration_in_millis" : 0,
"writes_in" : 0
},
"name" : "plain",
"decode" : {
"out" : 0,
"duration_in_millis" : 0,
"writes_in" : 0
}
}, {
"id" : "rubydebug_e958e3dc-10f6-4dd6-b7c5-ae3de2892afb",
"encode" : {
"duration_in_millis" : 0,
"writes_in" : 0
},
"name" : "rubydebug",
"decode" : {
"out" : 0,
"duration_in_millis" : 0,
"writes_in" : 0
}
}, {
"id" : "plain_addb97be-fb77-4cbc-b45c-0424cd5d0ac7",
"encode" : {
"duration_in_millis" : 0,
"writes_in" : 0
},
"name" : "plain",
"decode" : {
"out" : 0,
"duration_in_millis" : 0,
"writes_in" : 0
}
} ],
"filters" : [ {
"id" : "9e8297a6ee7b61864f77853317dccde83d29952ef869010c385dcfc9064ab8b8",
"events" : {
"in" : 2665549,
"out" : 2665549,
"duration_in_millis" : 8648
},
"name" : "date",
"matches" : 2665549
}, {
"id" : "bec0c77b3f53a78c7878449c72ec59f97be31c1f12f9621f61ed2d4563bad869",
"events" : {
"in" : 2665549,
"out" : 2665549,
"duration_in_millis" : 195138
},
"name" : "fingerprint"
} ],
"outputs" : [ {
"id" : "df59066a933f038354c1845ba44de692f70dbd0d2009ab07a12b98b776be7e3f",
"events" : {
"in" : 0,
"out" : 0,
"duration_in_millis" : 25
},
"name" : "stdout"
}, {
"id" : "38967f09bbd2647a95aa00702b6b557bdbbab31da6a04f991d38abe5629779e3",
"events" : {
"in" : 2665549,
"out" : 2665549,
"duration_in_millis" : 2802177
},
"name" : "elasticsearch",
"bulk_requests" : {
"successes" : 2870,
"responses" : {
"200" : 2870
},
"failures": 262,
"with_errors": 9089
},
"documents" : {
"successes" : 2665549,
"retryable_failures": 13733
}
} ]
},
"reloads" : {
"successes" : 4,
"last_error" : null,
"failures" : 0,
"last_success_timestamp" : "2020-06-05T08:06:12.538Z",
"last_failure_timestamp" : null
},
"queue" : {
"type" : "persisted",
"events_count" : 0,
"queue_size_in_bytes" : 32028566,
"max_queue_size_in_bytes" : 4294967296
},
"hash" : "5bc589ae4b02cb3e436626429b50928b9d99360639c84dc7fc69268ac01a9fd0",
"ephemeral_id" : "4bcacefa-6cbf-461e-b14e-184edd9ebdf3"
}
}
}`