
Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>

@@ -0,0 +1,309 @@
# JSON
The `json` output data format converts metrics into JSON documents.
## Configuration
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "json"
## The resolution to use for the metric timestamp. Must be a duration string
## such as "1ns", "1us", "1ms", "10ms", "1s". Durations are truncated to
## the power of 10 less than the specified units.
json_timestamp_units = "1s"
  ## The default timestamp format is Unix epoch time, subject to the
  ## resolution configured in json_timestamp_units.
  ## Other timestamp layouts can be configured using the Go language time
  ## layout specification from https://golang.org/pkg/time/#Time.Format
  ## e.g.: json_timestamp_format = "2006-01-02T15:04:05Z07:00"
#json_timestamp_format = ""
## A [JSONata](https://jsonata.org/) transformation of the JSON in
  ## [standard-form](#examples). Please note that only version 1.5.4 of
  ## JSONata is supported due to the underlying library used.
  ## This allows you to generate an arbitrary output form based on the metric(s). Please use
## multiline strings (starting and ending with three single-quotes) if needed.
#json_transformation = ""
## Filter for fields that contain nested JSON data.
## The serializer will try to decode matching STRING fields containing
## valid JSON. This is done BEFORE any JSON transformation. The filters
## can contain wildcards.
#json_nested_fields_include = []
#json_nested_fields_exclude = []
```
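To make the truncation rule for `json_timestamp_units` concrete, here is a minimal standalone Go sketch (not part of the plugin; the `truncate` helper is illustrative only) that mirrors the documented power-of-ten reduction and prints the `timestamp` value produced for the metric time 2018-05-05T00:06:35.123456789Z used in the serializer tests:
```go
package main

import (
	"fmt"
	"time"
)

// truncate reduces a configured resolution to the largest power-of-ten
// duration that does not exceed it, as described for json_timestamp_units.
func truncate(units time.Duration) time.Duration {
	d := time.Nanosecond
	for d*10 <= units {
		d *= 10
	}
	return d
}

func main() {
	ts := time.Unix(1525478795, 123456789) // 2018-05-05T00:06:35.123456789Z
	for _, u := range []time.Duration{time.Second, 15 * time.Millisecond, 65 * time.Millisecond} {
		r := truncate(u)
		fmt.Printf("%v is truncated to %v, timestamp=%d\n", u, r, ts.UnixNano()/int64(r))
	}
	// 1s is truncated to 1s, timestamp=1525478795
	// 15ms is truncated to 10ms, timestamp=152547879512
	// 65ms is truncated to 10ms, timestamp=152547879512
}
```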
## Examples
Standard form:
```json
{
"fields": {
"field_1": 30,
"field_2": 4,
"field_N": 59,
"n_images": 660
},
"name": "docker",
"tags": {
"host": "raynor"
},
"timestamp": 1458229140
}
```
When an output plugin needs to emit multiple metrics at once, it may use the
batch format. Whether the batch format is used is determined by the plugin;
refer to the documentation of the specific plugin.
```json
{
"metrics": [
{
"fields": {
"field_1": 30,
"field_2": 4,
"field_N": 59,
"n_images": 660
},
"name": "docker",
"tags": {
"host": "raynor"
},
"timestamp": 1458229140
},
{
"fields": {
"field_1": 30,
"field_2": 4,
"field_N": 59,
"n_images": 660
},
"name": "docker",
"tags": {
"host": "raynor"
},
"timestamp": 1458229140
}
]
}
```
## Transformations
Transformations using the [JSONata standard](https://jsonata.org/) can be specified with
the `json_transformation` parameter. The input to the transformation is the serialized
metric in the standard-form above.
**Note**: There is a difference between batch and non-batch serialization mode!
The former adds a `metrics` field containing the metric array, while the latter
serializes the metric directly.
**Note**: JSONata support is limited to version 1.5.4 due to the underlying
library used by Telegraf. When using the online playground below, make sure
version 1.5.4 is selected when trying examples or building transformations.
The following sections show some rudimentary examples of transformations.
For more elaborate JSONata expressions please consult the
[documentation](https://docs.jsonata.org) or the
[online playground](https://try.jsonata.org).
### Non-batch mode
The examples in this section use the following input to the transformation:
```json
{
"fields": {
"field_1": 30,
"field_2": 4,
"field_N": 59,
"n_images": 660
},
"name": "docker",
"tags": {
"host": "raynor"
},
"timestamp": 1458229140
}
```
If you want to flatten the above metric, you can use
```json
$merge([{"name": name, "timestamp": timestamp}, tags, fields])
```
to get
```json
{
"name": "docker",
"timestamp": 1458229140,
"host": "raynor",
"field_1": 30,
"field_2": 4,
"field_N": 59,
"n_images": 660
}
```
It is also possible to do arithmetic or renaming. For example,
```json
{
"capacity": $sum($sift($.fields,function($value,$key){$key~>/^field_/}).*),
"images": fields.n_images,
"host": tags.host,
"time": $fromMillis(timestamp*1000)
}
```
will result in
```json
{
"capacity": 93,
"images": 660,
"host": "raynor",
"time": "2016-03-17T15:39:00.000Z"
}
```
### Batch mode
When an output plugin emits multiple metrics in a batch, it might be useful
to restructure and/or combine the metric elements. We will use the following
input example in this section:
```json
{
"metrics": [
{
"fields": {
"field_1": 30,
"field_2": 4,
"field_N": 59,
"n_images": 660
},
"name": "docker",
"tags": {
"host": "raynor"
},
"timestamp": 1458229140
},
{
"fields": {
"field_1": 12,
"field_2": 43,
"field_3": 0,
"field_4": 5,
"field_5": 7,
"field_N": 27,
"n_images": 72
},
"name": "docker",
"tags": {
"host": "amaranth"
},
"timestamp": 1458229140
},
{
"fields": {
"field_1": 5,
"field_N": 34,
"n_images": 0
},
"name": "storage",
"tags": {
"host": "amaranth"
},
"timestamp": 1458229140
}
]
}
```
We can do the same computation as above, iterating over the metrics
```json
metrics.{
"capacity": $sum($sift($.fields,function($value,$key){$key~>/^field_/}).*),
"images": fields.n_images,
"service": (name & "(" & tags.host & ")"),
"time": $fromMillis(timestamp*1000)
}
```
resulting in
```json
[
{
"capacity": 93,
"images": 660,
"service": "docker(raynor)",
"time": "2016-03-17T15:39:00.000Z"
},
{
"capacity": 94,
"images": 72,
"service": "docker(amaranth)",
"time": "2016-03-17T15:39:00.000Z"
},
{
"capacity": 39,
"images": 0,
"service": "storage(amaranth)",
"time": "2016-03-17T15:39:00.000Z"
}
]
```
However, the more interesting use-case is to restructure and **combine** the metrics, e.g. by grouping by `host`
```json
{
"time": $min(metrics.timestamp) * 1000 ~> $fromMillis(),
"images": metrics{
tags.host: {
name: fields.n_images
}
},
"capacity alerts": metrics[fields.n_images < 10].[(tags.host & " " & name)]
}
```
resulting in
```json
{
"time": "2016-03-17T15:39:00.000Z",
"images": {
"raynor": {
"docker": 660
},
"amaranth": {
"docker": 72,
"storage": 0
}
},
"capacity alerts": [
"amaranth storage"
]
}
```
Please consult the JSONata documentation for more examples and details.


@@ -0,0 +1,169 @@
package json
import (
"encoding/json"
"errors"
"fmt"
"math"
"time"
"github.com/blues/jsonata-go"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/filter"
"github.com/influxdata/telegraf/plugins/serializers"
)
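
// Serializer implements the JSON output data format. It converts metrics into
// the standard-form documents shown in the README, optionally decodes nested
// JSON in string fields, and can apply a JSONata transformation to the result.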
type Serializer struct {
TimestampUnits config.Duration `toml:"json_timestamp_units"`
TimestampFormat string `toml:"json_timestamp_format"`
Transformation string `toml:"json_transformation"`
NestedFieldsInclude []string `toml:"json_nested_fields_include"`
NestedFieldsExclude []string `toml:"json_nested_fields_exclude"`
nestedfields filter.Filter
}
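
// Init applies the default 1s timestamp resolution, truncates the configured
// resolution to a power of ten, and compiles the include/exclude filter for
// nested string fields.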
func (s *Serializer) Init() error {
// Default precision is 1s
if s.TimestampUnits <= 0 {
s.TimestampUnits = config.Duration(time.Second)
}
// Search for the power of ten less than the duration
d := time.Nanosecond
t := time.Duration(s.TimestampUnits)
for {
if d*10 > t {
t = d
break
}
d = d * 10
}
s.TimestampUnits = config.Duration(t)
if len(s.NestedFieldsInclude) > 0 || len(s.NestedFieldsExclude) > 0 {
f, err := filter.NewIncludeExcludeFilter(s.NestedFieldsInclude, s.NestedFieldsExclude)
if err != nil {
return err
}
s.nestedfields = f
}
return nil
}
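
// Serialize converts a single metric into its standard-form JSON document,
// applying the configured JSONata transformation (if any) before marshalling.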
func (s *Serializer) Serialize(metric telegraf.Metric) ([]byte, error) {
var obj interface{}
obj = s.createObject(metric)
if s.Transformation != "" {
var err error
if obj, err = s.transform(obj); err != nil {
if errors.Is(err, jsonata.ErrUndefined) {
return nil, fmt.Errorf("%w (maybe configured for batch mode?)", err)
}
return nil, err
}
}
serialized, err := json.Marshal(obj)
if err != nil {
return nil, err
}
serialized = append(serialized, '\n')
return serialized, nil
}
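
// SerializeBatch wraps all metrics in a single {"metrics": [...]} document
// before applying the optional transformation, which is why batch and
// non-batch transformations have to be written against different input shapes.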
func (s *Serializer) SerializeBatch(metrics []telegraf.Metric) ([]byte, error) {
objects := make([]interface{}, 0, len(metrics))
for _, metric := range metrics {
m := s.createObject(metric)
objects = append(objects, m)
}
var obj interface{}
obj = map[string]interface{}{
"metrics": objects,
}
if s.Transformation != "" {
var err error
if obj, err = s.transform(obj); err != nil {
if errors.Is(err, jsonata.ErrUndefined) {
return nil, fmt.Errorf("%w (maybe configured for non-batch mode?)", err)
}
return nil, err
}
}
serialized, err := json.Marshal(obj)
if err != nil {
return nil, err
}
serialized = append(serialized, '\n')
return serialized, nil
}
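
// createObject builds the standard-form representation of a metric: "name",
// "tags", "fields" and a "timestamp" rendered either in the configured units
// or with the configured Go time layout. NaN and Inf float fields are dropped
// because JSON cannot represent them, and string fields matching the nested
// field filter are decoded into nested objects.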
func (s *Serializer) createObject(metric telegraf.Metric) map[string]interface{} {
m := make(map[string]interface{}, 4)
tags := make(map[string]string, len(metric.TagList()))
for _, tag := range metric.TagList() {
tags[tag.Key] = tag.Value
}
m["tags"] = tags
fields := make(map[string]interface{}, len(metric.FieldList()))
for _, field := range metric.FieldList() {
val := field.Value
switch fv := field.Value.(type) {
case float64:
// JSON does not support these special values
if math.IsNaN(fv) || math.IsInf(fv, 0) {
continue
}
case string:
// Check for nested fields if any
if s.nestedfields != nil && s.nestedfields.Match(field.Key) {
bv := []byte(fv)
if json.Valid(bv) {
var nested interface{}
if err := json.Unmarshal(bv, &nested); err == nil {
val = nested
}
}
}
}
fields[field.Key] = val
}
m["fields"] = fields
m["name"] = metric.Name()
if s.TimestampFormat == "" {
m["timestamp"] = metric.Time().UnixNano() / int64(s.TimestampUnits)
} else {
m["timestamp"] = metric.Time().UTC().Format(s.TimestampFormat)
}
return m
}
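
// transform compiles the configured JSONata expression on every call and
// evaluates it against the already-built object (see also
// TestSerializeTransformationIssue12734, which exercises repeated serialization).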
func (s *Serializer) transform(obj interface{}) (interface{}, error) {
transformation, err := jsonata.Compile(s.Transformation)
if err != nil {
return nil, err
}
return transformation.Eval(obj)
}
func init() {
serializers.Add("json",
func() telegraf.Serializer {
return &Serializer{}
},
)
}


@@ -0,0 +1,514 @@
package json
import (
"encoding/json"
"fmt"
"math"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/influxdata/toml"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/metric"
"github.com/influxdata/telegraf/plugins/parsers/influx"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/telegraf/testutil"
)
func TestSerializeMetricFloat(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": float64(91.5),
}
m := metric.New("cpu", tags, fields, now)
s := Serializer{}
require.NoError(t, s.Init())
buf, err := s.Serialize(m)
require.NoError(t, err)
expS := []byte(fmt.Sprintf(`{"fields":{"usage_idle":91.5},"name":"cpu","tags":{"cpu":"cpu0"},"timestamp":%d}`, now.Unix()) + "\n")
require.Equal(t, string(expS), string(buf))
}
func TestSerialize_TimestampUnits(t *testing.T) {
tests := []struct {
name string
timestampUnits time.Duration
timestampFormat string
expected string
}{
{
name: "default of 1s",
timestampUnits: 0,
expected: `{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":1525478795}`,
},
{
name: "1ns",
timestampUnits: 1 * time.Nanosecond,
expected: `{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":1525478795123456789}`,
},
{
name: "1ms",
timestampUnits: 1 * time.Millisecond,
expected: `{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":1525478795123}`,
},
{
name: "10ms",
timestampUnits: 10 * time.Millisecond,
expected: `{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":152547879512}`,
},
{
name: "15ms is reduced to 10ms",
timestampUnits: 15 * time.Millisecond,
expected: `{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":152547879512}`,
},
{
name: "65ms is reduced to 10ms",
timestampUnits: 65 * time.Millisecond,
expected: `{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":152547879512}`,
},
{
name: "timestamp format",
timestampFormat: "2006-01-02T15:04:05Z07:00",
expected: `{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":"2018-05-05T00:06:35Z"}`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := metric.New(
"cpu",
map[string]string{},
map[string]interface{}{
"value": 42.0,
},
time.Unix(1525478795, 123456789),
)
s := Serializer{
TimestampUnits: config.Duration(tt.timestampUnits),
TimestampFormat: tt.timestampFormat,
}
require.NoError(t, s.Init())
actual, err := s.Serialize(m)
require.NoError(t, err)
require.Equal(t, tt.expected+"\n", string(actual))
})
}
}
func TestSerializeMetricInt(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": int64(90),
}
m := metric.New("cpu", tags, fields, now)
s := Serializer{}
require.NoError(t, s.Init())
buf, err := s.Serialize(m)
require.NoError(t, err)
expS := []byte(fmt.Sprintf(`{"fields":{"usage_idle":90},"name":"cpu","tags":{"cpu":"cpu0"},"timestamp":%d}`, now.Unix()) + "\n")
require.Equal(t, string(expS), string(buf))
}
func TestSerializeMetricString(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": "foobar",
}
m := metric.New("cpu", tags, fields, now)
s := Serializer{}
require.NoError(t, s.Init())
buf, err := s.Serialize(m)
require.NoError(t, err)
expS := []byte(fmt.Sprintf(`{"fields":{"usage_idle":"foobar"},"name":"cpu","tags":{"cpu":"cpu0"},"timestamp":%d}`, now.Unix()) + "\n")
require.Equal(t, string(expS), string(buf))
}
func TestSerializeMultiFields(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": int64(90),
"usage_total": 8559615,
}
m := metric.New("cpu", tags, fields, now)
s := Serializer{}
require.NoError(t, s.Init())
buf, err := s.Serialize(m)
require.NoError(t, err)
expS := []byte(fmt.Sprintf(`{"fields":{"usage_idle":90,"usage_total":8559615},"name":"cpu","tags":{"cpu":"cpu0"},"timestamp":%d}`, now.Unix()) + "\n")
require.Equal(t, string(expS), string(buf))
}
func TestSerializeMetricWithEscapes(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu tag": "cpu0",
}
fields := map[string]interface{}{
"U,age=Idle": int64(90),
}
m := metric.New("My CPU", tags, fields, now)
s := Serializer{}
require.NoError(t, s.Init())
buf, err := s.Serialize(m)
require.NoError(t, err)
expS := []byte(fmt.Sprintf(`{"fields":{"U,age=Idle":90},"name":"My CPU","tags":{"cpu tag":"cpu0"},"timestamp":%d}`, now.Unix()) + "\n")
require.Equal(t, string(expS), string(buf))
}
func TestSerializeBatch(t *testing.T) {
m := metric.New(
"cpu",
map[string]string{},
map[string]interface{}{
"value": 42.0,
},
time.Unix(0, 0),
)
metrics := []telegraf.Metric{m, m}
s := Serializer{}
require.NoError(t, s.Init())
buf, err := s.SerializeBatch(metrics)
require.NoError(t, err)
require.JSONEq(
t,
`{"metrics":[{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":0},{"fields":{"value":42},"name":"cpu","tags":{},"timestamp":0}]}`,
string(buf),
)
}
func TestSerializeBatchSkipInf(t *testing.T) {
metrics := []telegraf.Metric{
testutil.MustMetric(
"cpu",
map[string]string{},
map[string]interface{}{
"inf": math.Inf(1),
"time_idle": 42,
},
time.Unix(0, 0),
),
}
s := Serializer{}
require.NoError(t, s.Init())
buf, err := s.SerializeBatch(metrics)
require.NoError(t, err)
require.JSONEq(t, `{"metrics":[{"fields":{"time_idle":42},"name":"cpu","tags":{},"timestamp":0}]}`, string(buf))
}
func TestSerializeBatchSkipInfAllFields(t *testing.T) {
metrics := []telegraf.Metric{
testutil.MustMetric(
"cpu",
map[string]string{},
map[string]interface{}{
"inf": math.Inf(1),
},
time.Unix(0, 0),
),
}
s := Serializer{}
require.NoError(t, s.Init())
buf, err := s.SerializeBatch(metrics)
require.NoError(t, err)
require.JSONEq(t, `{"metrics":[{"fields":{},"name":"cpu","tags":{},"timestamp":0}]}`, string(buf))
}
func TestSerializeTransformationNonBatch(t *testing.T) {
var tests = []struct {
name string
filename string
}{
{
name: "non-batch transformation test",
filename: "testcases/transformation_single.conf",
},
}
parser := &influx.Parser{}
require.NoError(t, parser.Init())
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
filename := filepath.FromSlash(tt.filename)
cfg, header, err := loadTestConfiguration(filename)
require.NoError(t, err)
// Get the input metrics
metrics, err := testutil.ParseMetricsFrom(header, "Input:", parser)
require.NoError(t, err)
// Get the expectations
expectedArray, err := loadJSON(strings.TrimSuffix(filename, ".conf") + "_out.json")
require.NoError(t, err)
expected := expectedArray.([]interface{})
// Serialize
serializer := Serializer{
TimestampUnits: config.Duration(cfg.TimestampUnits),
TimestampFormat: cfg.TimestampFormat,
Transformation: cfg.Transformation,
}
require.NoError(t, serializer.Init())
for i, m := range metrics {
buf, err := serializer.Serialize(m)
require.NoError(t, err)
// Compare
var actual interface{}
require.NoError(t, json.Unmarshal(buf, &actual))
require.EqualValuesf(t, expected[i], actual, "mismatch in %d", i)
}
})
}
}
func TestSerializeTransformationBatch(t *testing.T) {
var tests = []struct {
name string
filename string
}{
{
name: "batch transformation test",
filename: "testcases/transformation_batch.conf",
},
}
parser := &influx.Parser{}
require.NoError(t, parser.Init())
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
filename := filepath.FromSlash(tt.filename)
cfg, header, err := loadTestConfiguration(filename)
require.NoError(t, err)
// Get the input metrics
metrics, err := testutil.ParseMetricsFrom(header, "Input:", parser)
require.NoError(t, err)
// Get the expectations
expected, err := loadJSON(strings.TrimSuffix(filename, ".conf") + "_out.json")
require.NoError(t, err)
// Serialize
serializer := Serializer{
TimestampUnits: config.Duration(cfg.TimestampUnits),
TimestampFormat: cfg.TimestampFormat,
Transformation: cfg.Transformation,
}
require.NoError(t, serializer.Init())
buf, err := serializer.SerializeBatch(metrics)
require.NoError(t, err)
// Compare
var actual interface{}
require.NoError(t, json.Unmarshal(buf, &actual))
require.EqualValues(t, expected, actual)
})
}
}
func TestSerializeTransformationIssue12734(t *testing.T) {
input := []telegraf.Metric{
metric.New(
"data",
map[string]string{"key": "a"},
map[string]interface{}{"value": 10.1},
time.Unix(0, 1676285135457000000),
),
metric.New(
"data",
map[string]string{"key": "b"},
map[string]interface{}{"value": 20.2},
time.Unix(0, 1676285135457000000),
),
metric.New(
"data",
map[string]string{"key": "c"},
map[string]interface{}{"value": 30.3},
time.Unix(0, 1676285135457000000),
),
}
transformation := `
{
"valueRows": metrics{$string(timestamp): fields.value[]} ~> $each(function($v, $k) {
{
"timestamp": $number($k),
"values": $v
}
})
}
`
expected := map[string]interface{}{
"valueRows": map[string]interface{}{
"timestamp": 1.676285135e+9,
"values": []interface{}{10.1, 20.2, 30.3},
},
}
// Setup serializer
serializer := Serializer{
Transformation: transformation,
}
require.NoError(t, serializer.Init())
// Check multiple serializations as issue #12734 shows that the
// transformation breaks after the first iteration
for i := 1; i <= 3; i++ {
buf, err := serializer.SerializeBatch(input)
require.NoErrorf(t, err, "broke in iteration %d", i)
// Compare
var actual interface{}
require.NoError(t, json.Unmarshal(buf, &actual))
require.EqualValuesf(t, expected, actual, "broke in iteration %d", i)
}
}
func TestSerializeNesting(t *testing.T) {
var tests = []struct {
name string
filename string
out string
}{
{
name: "nested fields include",
filename: "testcases/nested_fields_include.conf",
out: "testcases/nested_fields_out.json",
},
{
name: "nested fields exclude",
filename: "testcases/nested_fields_exclude.conf",
out: "testcases/nested_fields_out.json",
},
}
parser := &influx.Parser{}
require.NoError(t, parser.Init())
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
filename := filepath.FromSlash(tt.filename)
cfg, header, err := loadTestConfiguration(filename)
require.NoError(t, err)
// Get the input metrics
metrics, err := testutil.ParseMetricsFrom(header, "Input:", parser)
require.NoError(t, err)
require.Len(t, metrics, 1)
// Get the expectations
expectedArray, err := loadJSON(tt.out)
require.NoError(t, err)
expected := expectedArray.(map[string]interface{})
// Serialize
serializer := Serializer{
TimestampUnits: config.Duration(cfg.TimestampUnits),
TimestampFormat: cfg.TimestampFormat,
Transformation: cfg.Transformation,
NestedFieldsInclude: cfg.JSONNestedFieldsInclude,
NestedFieldsExclude: cfg.JSONNestedFieldsExclude,
}
require.NoError(t, serializer.Init())
buf, err := serializer.Serialize(metrics[0])
require.NoError(t, err)
// Compare
var actual interface{}
require.NoError(t, json.Unmarshal(buf, &actual))
require.EqualValues(t, expected, actual)
})
}
}
type Config struct {
TimestampUnits time.Duration `toml:"json_timestamp_units"`
TimestampFormat string `toml:"json_timestamp_format"`
Transformation string `toml:"json_transformation"`
JSONNestedFieldsInclude []string `toml:"json_nested_fields_include"`
JSONNestedFieldsExclude []string `toml:"json_nested_fields_exclude"`
}
func loadTestConfiguration(filename string) (*Config, []string, error) {
buf, err := os.ReadFile(filename)
if err != nil {
return nil, nil, err
}
header := make([]string, 0)
for _, line := range strings.Split(string(buf), "\n") {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "#") {
header = append(header, line)
}
}
var cfg Config
err = toml.Unmarshal(buf, &cfg)
return &cfg, header, err
}
func loadJSON(filename string) (interface{}, error) {
buf, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
var data interface{}
err = json.Unmarshal(buf, &data)
return data, err
}
func BenchmarkSerialize(b *testing.B) {
s := &Serializer{}
require.NoError(b, s.Init())
metrics := serializers.BenchmarkMetrics(b)
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := s.Serialize(metrics[i%len(metrics)])
require.NoError(b, err)
}
}
func BenchmarkSerializeBatch(b *testing.B) {
s := &Serializer{}
require.NoError(b, s.Init())
m := serializers.BenchmarkMetrics(b)
metrics := m[:]
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := s.SerializeBatch(metrics)
require.NoError(b, err)
}
}


@@ -0,0 +1,6 @@
# Example for decoding fields that contain nested JSON structures.
#
# Input:
# in,host=myhost,type=diagnostic hops=10,latency=1.23,id-1234="{\"address\": \"AB1A\", \"status\": \"online\"}",id-0000="{\"status\": \"offline\"}",id-5678="{\"address\": \"0000\", \"status\": \"online\"}" 1666006350000000000
json_nested_fields_exclude = ["hops", "latency"]


@@ -0,0 +1,6 @@
# Example for decoding fields that contain nested JSON structures.
#
# Input:
# in,host=myhost,type=diagnostic hops=10,latency=1.23,id-1234="{\"address\": \"AB1A\", \"status\": \"online\"}",id-0000="{\"status\": \"offline\"}",id-5678="{\"address\": \"0000\", \"status\": \"online\"}" 1666006350000000000
json_nested_fields_include = ["id-*"]


@@ -0,0 +1,23 @@
{
"fields": {
"id-1234": {
"address": "AB1A",
"status": "online"
},
"id-0000": {
"status": "offline"
},
"id-5678": {
"address": "0000",
"status": "online"
},
"hops": 10,
"latency": 1.23
},
"name": "in",
"tags": {
"host": "myhost",
"type": "diagnostic"
},
"timestamp": 1666006350
}


@@ -0,0 +1,24 @@
# Example for transforming the output JSON with batch metrics.
#
# Input:
# impression,flagname=F5,host=1cbbb3796fc2,key=12345,platform=Java,sdkver=4.9.1,value=false count_sum=5i 1653643420000000000
# expression,flagname=E42,host=klaus,key=67890,platform=Golang,sdkver=1.18.3,value=true count_sum=42i 1653646789000000000
json_transformation = '''
metrics.{
"sdkVersion": tags.sdkver,
"time": timestamp,
"platform": platform,
"key": tags.key,
"events": [
{
"time": timestamp,
"flag": tags.flagname,
"experimentVersion": 0,
"value": tags.value,
"type": $uppercase(name),
"count": fields.count_sum
}
]
}
'''


@@ -0,0 +1,32 @@
[
{
"sdkVersion": "4.9.1",
"time": 1653643420,
"key": "12345",
"events": [
{
"time": 1653643420,
"flag": "F5",
"experimentVersion": 0,
"value": "false",
"type": "IMPRESSION",
"count": 5
}
]
},
{
"sdkVersion": "1.18.3",
"time": 1653646789,
"key": "67890",
"events": [
{
"time": 1653646789,
"flag": "E42",
"experimentVersion": 0,
"value": "true",
"type": "EXPRESSION",
"count": 42
}
]
}
]


@@ -0,0 +1,24 @@
# Example for transforming the output JSON in non-batch mode.
#
# Input:
# impression,flagname=F5,host=1cbbb3796fc2,key=12345,platform=Java,sdkver=4.9.1,value=false count_sum=5i 1653643420000000000
# expression,flagname=E42,host=klaus,key=67890,platform=Golang,sdkver=1.18.3,value=true count_sum=42i 1653646789000000000
json_transformation = '''
{
"sdkVersion": tags.sdkver,
"time": timestamp,
"platform": platform,
"key": tags.key,
"events": [
{
"time": timestamp,
"flag": tags.flagname,
"experimentVersion": 0,
"value": tags.value,
"type": $uppercase(name),
"count": fields.count_sum
}
]
}
'''


@@ -0,0 +1,32 @@
[
{
"sdkVersion": "4.9.1",
"time": 1653643420,
"key": "12345",
"events": [
{
"time": 1653643420,
"flag": "F5",
"experimentVersion": 0,
"value": "false",
"type": "IMPRESSION",
"count": 5
}
]
},
{
"sdkVersion": "1.18.3",
"time": 1653646789,
"key": "67890",
"events": [
{
"time": 1653646789,
"flag": "E42",
"experimentVersion": 0,
"value": "true",
"type": "EXPRESSION",
"count": 42
}
]
}
]