Adding upstream version 1.34.4.
Signed-off-by: Daniel Baumann <daniel@debian.org>
parent e393c3af3f
commit 4978089aab
4963 changed files with 677545 additions and 0 deletions

136  plugins/inputs/logparser/README.md  (Normal file)
@@ -0,0 +1,136 @@
# Logparser Input Plugin

This service plugin streams and parses the given logfiles. It currently
supports parsing "grok" patterns from logfiles; grok patterns may also
contain regular expressions.

> [!IMPORTANT]
> This plugin is deprecated. Please use the [`tail` plugin][tail] in
> combination with the [`grok` data format][grok_parser] as a replacement.

⭐ Telegraf v1.0.0
🚩 Telegraf v1.15.0
🔥 Telegraf v1.35.0
🏷️ system, logging
💻 freebsd, linux, macos, windows

## Migration guide

This plugin has been deprecated since Telegraf v1.15. To replace it, please
use the [`tail` plugin][tail] in combination with the
[`grok` data format][grok_parser].

Here is an example of replacing an existing instance:

```diff
- [[inputs.logparser]]
-   files = ["/var/log/apache/access.log"]
-   from_beginning = false
-   [inputs.logparser.grok]
-     patterns = ["%{COMBINED_LOG_FORMAT}"]
-     measurement = "apache_access_log"
-     custom_pattern_files = []
-     custom_patterns = '''
-     '''
-     timezone = "Canada/Eastern"

+ [[inputs.tail]]
+   files = ["/var/log/apache/access.log"]
+   from_beginning = false
+   grok_patterns = ["%{COMBINED_LOG_FORMAT}"]
+   name_override = "apache_access_log"
+   grok_custom_pattern_files = []
+   grok_custom_patterns = '''
+   '''
+   grok_timezone = "Canada/Eastern"
+   data_format = "grok"
```

[tail]: /plugins/inputs/tail/README.md
[grok_parser]: /plugins/parsers/grok/README.md

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering,
etc. See [CONFIGURATION.md][CONFIGURATION.md] for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Stream and parse the given logfiles using grok patterns
[[inputs.logparser]]
  ## Log files to parse.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk", i.e.:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["/var/log/apache/access.log"]

  ## Read files that currently exist from the beginning. Files that are created
  ## while telegraf is running (and that match the "files" globs) will always
  ## be read from the beginning.
  from_beginning = false

  ## Method used to watch for file updates. Can be either "inotify" or "poll".
  # watch_method = "inotify"

  ## Parse logstash-style "grok" patterns:
  [inputs.logparser.grok]
    ## This is a list of patterns to check the given log file(s) for.
    ## Note that adding patterns here increases processing time. The most
    ## efficient configuration is to have one pattern per logparser.
    ## Other common built-in patterns are:
    ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
    ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
    patterns = ["%{COMBINED_LOG_FORMAT}"]

    ## Name of the output measurement.
    measurement = "apache_access_log"

    ## Full path(s) to custom pattern files.
    custom_pattern_files = []

    ## Custom patterns can also be defined here. Put one pattern per line.
    custom_patterns = '''
    '''

    ## Timezone allows you to provide an override for timestamps that
    ## don't already include an offset,
    ## e.g. 04/06/2016 12:41:45 data one two 5.43µs
    ##
    ## Default: "" which renders UTC
    ## Options are as follows:
    ##   1. Local            -- interpret based on machine localtime
    ##   2. "Canada/Eastern" -- Unix TZ values like those found in
    ##      https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
    ##   3. UTC              -- or blank/unspecified, will return the timestamp in UTC
    # timezone = "Canada/Eastern"

    ## When set to "disable", the timestamp will not be incremented if there
    ## is a duplicate.
    # unique_timestamp = "auto"
```

## Metrics

The plugin accepts arbitrary input and parses it according to the `grok`
patterns configured. There is no predefined metric format.

## Example Output

There is no predefined metric format, so output depends on plugin input.
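
Purely as an editorial illustration (the field and tag names below follow the
built-in `%{COMBINED_LOG_FORMAT}` pattern; the exact set and types are
determined by the pattern definitions): given the sample configuration above
and an access-log line such as

```text
127.0.0.1 ident auth [10/Oct/2000:13:55:36 -0700] "GET /anything HTTP/1.0" 200 2326 "http://localhost:8083/" "Chrome/51.0.2704.84"
```

the emitted metric might look roughly like this in line protocol (referrer and
agent omitted for brevity):

```text
apache_access_log,path=/var/log/apache/access.log,resp_code=200,verb=GET client_ip="127.0.0.1",ident="ident",auth="auth",request="/anything",http_version=1,resp_bytes=2326i 971211336000000000
```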

13  plugins/inputs/logparser/dev/docker-compose.yml  (Normal file)
@@ -0,0 +1,13 @@
version: '3'

services:
  telegraf:
    image: glinton/scratch
    volumes:
      - ./telegraf.conf:/telegraf.conf
      - ../../../../telegraf:/telegraf
      - ./test.log:/var/log/test.log
    entrypoint:
      - /telegraf
      - --config
      - /telegraf.conf

12  plugins/inputs/logparser/dev/telegraf.conf  (Normal file)
@@ -0,0 +1,12 @@
[agent]
  interval="1s"
  flush_interval="1s"

[[inputs.logparser]]
  files = ["/var/log/test.log"]
  from_beginning = true
  [inputs.logparser.grok]
    patterns = [ "%{COMBINED_LOG_FORMAT}", "%{CLIENT:client_ip} %{NOTSPACE:ident} %{NOTSPACE:auth} \\[%{TIMESTAMP_ISO8601:timestamp}\\] \"(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})\" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-) %{QS:referrer} %{QS:agent}"]

[[outputs.file]]
  files = ["stdout"]

2  plugins/inputs/logparser/dev/test.log  (Normal file)
@@ -0,0 +1,2 @@
127.0.0.1 ident auth [10/Oct/2000:13:55:36 -0700] "GET /anything HTTP/1.0" 200 2326 "http://localhost:8083/" "Chrome/51.0.2704.84"
127.0.0.1 ident auth [2018-02-21 13:10:34,555] "GET /peter HTTP/1.0" 200 2326 "http://localhost:8083/" "Chrome/51.0.2704.84"

307  plugins/inputs/logparser/logparser.go  (Normal file)
@@ -0,0 +1,307 @@
//go:generate ../../../tools/readme_config_includer/generator
//go:build !solaris

package logparser

import (
	_ "embed"
	"fmt"
	"strings"
	"sync"

	"github.com/influxdata/tail"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal/globpath"
	"github.com/influxdata/telegraf/models"
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/influxdata/telegraf/plugins/parsers/grok"
)

//go:embed sample.conf
var sampleConfig string

var (
	offsets      = make(map[string]int64)
	offsetsMutex = new(sync.Mutex)
)

const (
	defaultWatchMethod = "inotify"
)

type LogParser struct {
	Files         []string        `toml:"files"`
	FromBeginning bool            `toml:"from_beginning"`
	WatchMethod   string          `toml:"watch_method"`
	GrokConfig    grokConfig      `toml:"grok"`
	Log           telegraf.Logger `toml:"-"`

	tailers map[string]*tail.Tail
	offsets map[string]int64
	lines   chan logEntry
	done    chan struct{}
	wg      sync.WaitGroup

	acc telegraf.Accumulator

	sync.Mutex
	grokParser telegraf.Parser
}

type grokConfig struct {
	MeasurementName    string `toml:"measurement"`
	Patterns           []string
	NamedPatterns      []string
	CustomPatterns     string
	CustomPatternFiles []string
	Timezone           string
	UniqueTimestamp    string
}

type logEntry struct {
	path string
	line string
}

func (*LogParser) SampleConfig() string {
	return sampleConfig
}

func (l *LogParser) Init() error {
	l.Log.Warnf(`The logparser plugin is deprecated; please use the 'tail' input with the 'grok' data_format`)
	return nil
}

func (l *LogParser) Start(acc telegraf.Accumulator) error {
	l.Lock()
	defer l.Unlock()

	l.acc = acc
	l.lines = make(chan logEntry, 1000)
	l.done = make(chan struct{})
	l.tailers = make(map[string]*tail.Tail)

	mName := "logparser"
	if l.GrokConfig.MeasurementName != "" {
		mName = l.GrokConfig.MeasurementName
	}

	// Build the grok parser from the plugin's grok configuration
	parser := grok.Parser{
		Measurement:        mName,
		Patterns:           l.GrokConfig.Patterns,
		NamedPatterns:      l.GrokConfig.NamedPatterns,
		CustomPatterns:     l.GrokConfig.CustomPatterns,
		CustomPatternFiles: l.GrokConfig.CustomPatternFiles,
		Timezone:           l.GrokConfig.Timezone,
		UniqueTimestamp:    l.GrokConfig.UniqueTimestamp,
	}
	err := parser.Init()
	if err != nil {
		return err
	}
	l.grokParser = &parser
	models.SetLoggerOnPlugin(l.grokParser, l.Log)

	l.wg.Add(1)
	go l.parser()

	l.tailNewFiles(l.FromBeginning)

	// clear offsets
	l.offsets = make(map[string]int64)
	// assumption that once Start is called, all parallel plugins have already been initialized
	offsetsMutex.Lock()
	offsets = make(map[string]int64)
	offsetsMutex.Unlock()

	return nil
}

func (l *LogParser) Gather(_ telegraf.Accumulator) error {
	l.Lock()
	defer l.Unlock()

	// always start from the beginning of files that appear while we're running
	l.tailNewFiles(true)

	return nil
}

func (l *LogParser) Stop() {
	l.Lock()
	defer l.Unlock()

	for _, t := range l.tailers {
		if !l.FromBeginning {
			// store offset for resume
			offset, err := t.Tell()
			if err == nil {
				l.offsets[t.Filename] = offset
				l.Log.Debugf("Recording offset %d for file: %v", offset, t.Filename)
			} else {
				l.acc.AddError(fmt.Errorf("error recording offset for file %s", t.Filename))
			}
		}
		err := t.Stop()

		// message for a stopped tailer
		l.Log.Debugf("Tail dropped for file: %v", t.Filename)

		if err != nil {
			l.Log.Errorf("Error stopping tail on file %s", t.Filename)
		}
	}
	close(l.done)
	l.wg.Wait()

	// persist offsets
	offsetsMutex.Lock()
	for k, v := range l.offsets {
		offsets[k] = v
	}
	offsetsMutex.Unlock()
}

// tailNewFiles checks the globs against files on disk, and starts tailing any
// new files. Assumes l's lock is held!
func (l *LogParser) tailNewFiles(fromBeginning bool) {
	var poll bool
	if l.WatchMethod == "poll" {
		poll = true
	}

	// Create a "tailer" for each file
	for _, filepath := range l.Files {
		g, err := globpath.Compile(filepath)
		if err != nil {
			l.Log.Errorf("Glob %q failed to compile: %s", filepath, err)
			continue
		}
		files := g.Match()

		for _, file := range files {
			if _, ok := l.tailers[file]; ok {
				// we're already tailing this file
				continue
			}

			var seek *tail.SeekInfo
			if !fromBeginning {
				if offset, ok := l.offsets[file]; ok {
					l.Log.Debugf("Using offset %d for file: %v", offset, file)
					seek = &tail.SeekInfo{
						Whence: 0, // seek relative to the start of the file
						Offset: offset,
					}
				} else {
					seek = &tail.SeekInfo{
						Whence: 2, // seek relative to the end of the file
						Offset: 0,
					}
				}
			}

			tailer, err := tail.TailFile(file,
				tail.Config{
					ReOpen:    true,
					Follow:    true,
					Location:  seek,
					MustExist: true,
					Poll:      poll,
					Logger:    tail.DiscardingLogger,
				})
			if err != nil {
				l.acc.AddError(err)
				continue
			}

			l.Log.Debugf("Tail added for file: %v", file)

			// create a goroutine for each "tailer"
			l.wg.Add(1)
			go l.receiver(tailer)
			l.tailers[file] = tailer
		}
	}
}

// receiver is launched as a goroutine to continuously watch a tailed logfile
// for changes and send any log lines down the l.lines channel.
func (l *LogParser) receiver(tailer *tail.Tail) {
	defer l.wg.Done()

	var line *tail.Line
	for line = range tailer.Lines {
		if line.Err != nil {
			l.Log.Errorf("Error tailing file %s, Error: %s",
				tailer.Filename, line.Err)
			continue
		}

		// Fix up files with Windows line endings.
		text := strings.TrimRight(line.Text, "\r")

		entry := logEntry{
			path: tailer.Filename,
			line: text,
		}

		select {
		case <-l.done:
		case l.lines <- entry:
		}
	}
}

// parser is launched as a goroutine to watch the l.lines channel. When a line
// is available, it is parsed and the resulting metric(s) are added to the
// accumulator.
func (l *LogParser) parser() {
	defer l.wg.Done()

	var m telegraf.Metric
	var err error
	var entry logEntry
	for {
		select {
		case <-l.done:
			return
		case entry = <-l.lines:
			if entry.line == "" || entry.line == "\n" {
				continue
			}
		}
		m, err = l.grokParser.ParseLine(entry.line)
		if err == nil {
			if m != nil {
				tags := m.Tags()
				tags["path"] = entry.path
				l.acc.AddFields(m.Name(), m.Fields(), tags, m.Time())
			}
		} else {
			l.Log.Errorf("Error parsing log line: %s", err.Error())
		}
	}
}

func newLogParser() *LogParser {
	offsetsMutex.Lock()
	offsetsCopy := make(map[string]int64, len(offsets))
	for k, v := range offsets {
		offsetsCopy[k] = v
	}
	offsetsMutex.Unlock()

	return &LogParser{
		WatchMethod: defaultWatchMethod,
		offsets:     offsetsCopy,
	}
}

func init() {
	inputs.Add("logparser", func() telegraf.Input {
		return newLogParser()
	})
}
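
An editorial aside on the concurrency above: `Stop()` relies on a standard Go
shutdown handshake. The producer (`receiver`) and the consumer (`parser`)
always `select` on the `done` channel alongside the `lines` channel, so a
single `close(done)` unblocks every goroutine before `wg.Wait()` returns. A
minimal, self-contained sketch of the same pattern (hypothetical names, not
part of the plugin):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	lines := make(chan string) // unbuffered here; the plugin buffers 1000 entries
	done := make(chan struct{})
	var wg sync.WaitGroup

	// producer: a stand-in for receiver(), which forwards tailed lines
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; ; i++ {
			select {
			case <-done: // a closed channel is always ready, so a blocked send is abandoned
				return
			case lines <- fmt.Sprintf("line %d", i):
			}
		}
	}()

	// consumer: a stand-in for parser(), which turns lines into metrics
	wg.Add(1)
	go func() {
		defer wg.Done()
		for {
			select {
			case <-done:
				return
			case line := <-lines:
				fmt.Println("parsed:", line)
			}
		}
	}()

	// Stop(): closing done unblocks both goroutines, then wait for them
	close(done)
	wg.Wait()
}
```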

3  plugins/inputs/logparser/logparser_solaris.go  (Normal file)
@@ -0,0 +1,3 @@
//go:build solaris
package logparser

228  plugins/inputs/logparser/logparser_test.go  (Normal file)
@@ -0,0 +1,228 @@
package logparser

import (
	"os"
	"path/filepath"
	"testing"
	"time"

	"github.com/stretchr/testify/require"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/testutil"
)

var (
	testdataDir = getTestdataDir()
)

func TestStartNoParsers(t *testing.T) {
	logparser := &LogParser{
		Log:           testutil.Logger{},
		FromBeginning: true,
		Files:         []string{filepath.Join(testdataDir, "*.log")},
	}

	acc := testutil.Accumulator{}
	require.NoError(t, logparser.Start(&acc))
}

func TestGrokParseLogFilesNonExistPattern(t *testing.T) {
	logparser := &LogParser{
		Log:           testutil.Logger{},
		FromBeginning: true,
		Files:         []string{filepath.Join(testdataDir, "*.log")},
		GrokConfig: grokConfig{
			Patterns:           []string{"%{FOOBAR}"},
			CustomPatternFiles: []string{filepath.Join(testdataDir, "test-patterns")},
		},
	}

	acc := testutil.Accumulator{}
	err := logparser.Start(&acc)
	require.Error(t, err)
}

func TestGrokParseLogFiles(t *testing.T) {
	logparser := &LogParser{
		Log: testutil.Logger{},
		GrokConfig: grokConfig{
			MeasurementName:    "logparser_grok",
			Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}", "%{TEST_LOG_C}"},
			CustomPatternFiles: []string{filepath.Join(testdataDir, "test-patterns")},
		},
		FromBeginning: true,
		Files:         []string{filepath.Join(testdataDir, "*.log")},
	}

	acc := testutil.Accumulator{}
	require.NoError(t, logparser.Start(&acc))
	acc.Wait(3)

	logparser.Stop()

	expected := []telegraf.Metric{
		testutil.MustMetric(
			"logparser_grok",
			map[string]string{
				"response_code": "200",
				"path":          filepath.Join(testdataDir, "test_a.log"),
			},
			map[string]interface{}{
				"clientip":      "192.168.1.1",
				"myfloat":       float64(1.25),
				"response_time": int64(5432),
				"myint":         int64(101),
			},
			time.Unix(0, 0),
		),
		testutil.MustMetric(
			"logparser_grok",
			map[string]string{
				"path": filepath.Join(testdataDir, "test_b.log"),
			},
			map[string]interface{}{
				"myfloat":    1.25,
				"mystring":   "mystring",
				"nomodifier": "nomodifier",
			},
			time.Unix(0, 0),
		),
		testutil.MustMetric(
			"logparser_grok",
			map[string]string{
				"path":          filepath.Join(testdataDir, "test_c.log"),
				"response_code": "200",
			},
			map[string]interface{}{
				"clientip":      "192.168.1.1",
				"myfloat":       1.25,
				"myint":         101,
				"response_time": 5432,
			},
			time.Unix(0, 0),
		),
	}

	testutil.RequireMetricsEqual(t, expected, acc.GetTelegrafMetrics(),
		testutil.IgnoreTime(), testutil.SortMetrics())
}

func TestGrokParseLogFilesAppearLater(t *testing.T) {
	emptydir := t.TempDir()

	logparser := &LogParser{
		Log:           testutil.Logger{},
		FromBeginning: true,
		Files:         []string{filepath.Join(emptydir, "*.log")},
		GrokConfig: grokConfig{
			MeasurementName:    "logparser_grok",
			Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
			CustomPatternFiles: []string{filepath.Join(testdataDir, "test-patterns")},
		},
	}

	acc := testutil.Accumulator{}
	require.NoError(t, logparser.Start(&acc))

	require.Equal(t, 0, acc.NFields())

	input, err := os.ReadFile(filepath.Join(testdataDir, "test_a.log"))
	require.NoError(t, err)

	err = os.WriteFile(filepath.Join(emptydir, "test_a.log"), input, 0640)
	require.NoError(t, err)

	require.NoError(t, acc.GatherError(logparser.Gather))
	acc.Wait(1)

	logparser.Stop()

	acc.AssertContainsTaggedFields(t, "logparser_grok",
		map[string]interface{}{
			"clientip":      "192.168.1.1",
			"myfloat":       float64(1.25),
			"response_time": int64(5432),
			"myint":         int64(101),
		},
		map[string]string{
			"response_code": "200",
			"path":          filepath.Join(emptydir, "test_a.log"),
		})
}

// Test that the test_a.log line gets parsed even though we don't have the
// correct pattern available for test_b.log
func TestGrokParseLogFilesOneBad(t *testing.T) {
	logparser := &LogParser{
		Log:           testutil.Logger{},
		FromBeginning: true,
		Files:         []string{filepath.Join(testdataDir, "test_a.log")},
		GrokConfig: grokConfig{
			MeasurementName:    "logparser_grok",
			Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_BAD}"},
			CustomPatternFiles: []string{filepath.Join(testdataDir, "test-patterns")},
		},
	}

	acc := testutil.Accumulator{}
	acc.SetDebug(true)
	require.NoError(t, logparser.Start(&acc))

	acc.Wait(1)
	logparser.Stop()

	acc.AssertContainsTaggedFields(t, "logparser_grok",
		map[string]interface{}{
			"clientip":      "192.168.1.1",
			"myfloat":       float64(1.25),
			"response_time": int64(5432),
			"myint":         int64(101),
		},
		map[string]string{
			"response_code": "200",
			"path":          filepath.Join(testdataDir, "test_a.log"),
		})
}

func TestGrokParseLogFiles_TimestampInEpochMilli(t *testing.T) {
	logparser := &LogParser{
		Log: testutil.Logger{},
		GrokConfig: grokConfig{
			MeasurementName:    "logparser_grok",
			Patterns:           []string{"%{TEST_LOG_C}"},
			CustomPatternFiles: []string{filepath.Join(testdataDir, "test-patterns")},
		},
		FromBeginning: true,
		Files:         []string{filepath.Join(testdataDir, "test_c.log")},
	}

	acc := testutil.Accumulator{}
	acc.SetDebug(true)
	require.NoError(t, logparser.Start(&acc))
	acc.Wait(1)

	logparser.Stop()

	acc.AssertContainsTaggedFields(t, "logparser_grok",
		map[string]interface{}{
			"clientip":      "192.168.1.1",
			"myfloat":       float64(1.25),
			"response_time": int64(5432),
			"myint":         int64(101),
		},
		map[string]string{
			"response_code": "200",
			"path":          filepath.Join(testdataDir, "test_c.log"),
		})
}

func getTestdataDir() string {
	dir, err := os.Getwd()
	if err != nil {
		// if we cannot even establish the test directory, further progress is meaningless
		panic(err)
	}

	return filepath.Join(dir, "testdata")
}

52  plugins/inputs/logparser/sample.conf  (Normal file)
@@ -0,0 +1,52 @@
# Stream and parse the given logfiles using grok patterns
[[inputs.logparser]]
  ## Log files to parse.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk", i.e.:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["/var/log/apache/access.log"]

  ## Read files that currently exist from the beginning. Files that are created
  ## while telegraf is running (and that match the "files" globs) will always
  ## be read from the beginning.
  from_beginning = false

  ## Method used to watch for file updates. Can be either "inotify" or "poll".
  # watch_method = "inotify"

  ## Parse logstash-style "grok" patterns:
  [inputs.logparser.grok]
    ## This is a list of patterns to check the given log file(s) for.
    ## Note that adding patterns here increases processing time. The most
    ## efficient configuration is to have one pattern per logparser.
    ## Other common built-in patterns are:
    ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
    ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
    patterns = ["%{COMBINED_LOG_FORMAT}"]

    ## Name of the output measurement.
    measurement = "apache_access_log"

    ## Full path(s) to custom pattern files.
    custom_pattern_files = []

    ## Custom patterns can also be defined here. Put one pattern per line.
    custom_patterns = '''
    '''

    ## Timezone allows you to provide an override for timestamps that
    ## don't already include an offset,
    ## e.g. 04/06/2016 12:41:45 data one two 5.43µs
    ##
    ## Default: "" which renders UTC
    ## Options are as follows:
    ##   1. Local            -- interpret based on machine localtime
    ##   2. "Canada/Eastern" -- Unix TZ values like those found in
    ##      https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
    ##   3. UTC              -- or blank/unspecified, will return the timestamp in UTC
    # timezone = "Canada/Eastern"

    ## When set to "disable", the timestamp will not be incremented if there
    ## is a duplicate.
    # unique_timestamp = "auto"

18  plugins/inputs/logparser/testdata/test-patterns  (vendored, Normal file)
@@ -0,0 +1,18 @@
# Test A log line:
#  [04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time:duration}
TEST_LOG_A \[%{HTTPDATE:timestamp:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME} %{NUMBER:myint:int}

# Test B log line:
# [04/06/2016--12:41:45] 1.25 mystring dropme nomodifier
TEST_TIMESTAMP %{MONTHDAY}/%{MONTHNUM}/%{YEAR}--%{TIME}
TEST_LOG_B \[%{TEST_TIMESTAMP:timestamp:ts-"02/01/2006--15:04:05"}\] %{NUMBER:myfloat:float} %{WORD:mystring:string} %{WORD:dropme:drop} %{WORD:nomodifier}

TEST_TIMESTAMP %{MONTHDAY}/%{MONTHNUM}/%{YEAR}--%{TIME}
TEST_LOG_BAD \[%{TEST_TIMESTAMP:timestamp:ts-"02/01/2006--15:04:05"}\] %{NUMBER:myfloat:float} %{WORD:mystring:int} %{WORD:dropme:drop} %{WORD:nomodifier}

# Test C log line:
# 1568723594631 1.25 200 192.168.1.1 5.432µs 101
TEST_LOG_C %{POSINT:timestamp:ts-epochmilli} %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME} %{NUMBER:myint:int}
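
An editorial note on the pattern syntax above (semantics per the grok data
format; the `TEST_LOG_D` pattern below is hypothetical and not part of the
shipped file): each capture has the form `%{PATTERN:field_name:modifier}`.
Modifiers such as `int`, `float`, and `string` set the field type, `tag` makes
the capture a tag instead of a field, `drop` discards it, and `ts-...`
modifiers (`ts-httpd`, `ts-epochmilli`, or a quoted Go reference layout such
as `ts-"02/01/2006--15:04:05"`) tell the parser how to read the timestamp.
For example:

```text
# hypothetical: matches a line like `200 5.1 GET`
# -> tag response_code="200", field elapsed=5.1 (float), field verb="GET"
TEST_LOG_D %{NUMBER:response_code:tag} %{NUMBER:elapsed:float} %{WORD:verb}
```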

1  plugins/inputs/logparser/testdata/test_a.log  (vendored, Normal file)
@@ -0,0 +1 @@
[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101

1  plugins/inputs/logparser/testdata/test_b.log  (vendored, Normal file)
@@ -0,0 +1 @@
[04/06/2016--12:41:45] 1.25 mystring dropme nomodifier

1  plugins/inputs/logparser/testdata/test_c.log  (vendored, Normal file)
@@ -0,0 +1 @@
1568723594631 1.25 200 192.168.1.1 5.432µs 101