Adding upstream version 1.34.4.

Signed-off-by: Daniel Baumann <daniel@debian.org>
4963 changed files with 677545 additions and 0 deletions


@@ -0,0 +1,137 @@
# Google Cloud PubSub Input Plugin
This plugin consumes messages from the [Google Cloud PubSub][pubsub] service
and creates metrics using one of the supported [data formats][data_formats].
⭐ Telegraf v1.10.0
🏷️ cloud, messaging
💻 all
[pubsub]: https://cloud.google.com/pubsub
[data_formats]: /docs/DATA_FORMATS_INPUT.md
## Service Input <!-- @/docs/includes/service_input.md -->
This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:
1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
output for this plugin
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or create aliases and configure ordering, etc.
See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
## Configuration
```toml @sample.conf
# Read metrics from Google PubSub
[[inputs.cloud_pubsub]]
## Required. Name of Google Cloud Platform (GCP) Project that owns
## the given PubSub subscription.
project = "my-project"
## Required. Name of PubSub subscription to ingest metrics from.
subscription = "my-subscription"
## Required. Data format to consume.
## Each data format has its own unique set of configuration options.
## Read more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
## Optional. Filepath for GCP credentials JSON file to authorize calls to
## PubSub APIs. If not set explicitly, Telegraf will attempt to use
## Application Default Credentials, which is preferred.
# credentials_file = "path/to/my/creds.json"
## Optional. Number of seconds to wait before attempting to restart the
## PubSub subscription receiver after an unexpected error.
## If the streaming pull for a PubSub Subscription fails (receiver),
## the agent attempts to restart receiving messages after this many seconds.
# retry_delay_seconds = 5
## Optional. Maximum byte length of a message to consume.
## Larger messages are dropped with an error. If zero, negative, or
## unspecified, this is treated as no limit.
# max_message_len = 1000000
## Max undelivered messages
## This plugin uses tracking metrics, which ensure messages are read to
## outputs before acknowledging them to the original broker to ensure data
## is not lost. This option sets the maximum messages to read from the
## broker that have not been written by an output.
##
## This value needs to be picked with awareness of the agent's
## metric_batch_size value as well. Setting max undelivered messages too high
## can result in a constant stream of data batches to the output, while
## setting it too low may never flush the broker's messages.
# max_undelivered_messages = 1000
## The following are optional Subscription ReceiveSettings in PubSub.
## Read more about these values:
## https://godoc.org/cloud.google.com/go/pubsub#ReceiveSettings
## Optional. Maximum number of seconds for which a PubSub subscription
## should auto-extend the PubSub ACK deadline for each message. If less than
## 0, auto-extension is disabled.
# max_extension = 0
## Optional. Maximum number of unprocessed messages in PubSub
## (unacknowledged but not yet expired in PubSub).
## A value of 0 is treated as the default PubSub value.
## Negative values will be treated as unlimited.
# max_outstanding_messages = 0
## Optional. Maximum size in bytes of unprocessed messages in PubSub
## (unacknowledged but not yet expired in PubSub).
## A value of 0 is treated as the default PubSub value.
## Negative values will be treated as unlimited.
# max_outstanding_bytes = 0
## Optional. Max number of goroutines a PubSub Subscription receiver can spawn
## to pull messages from PubSub concurrently. This limit applies to each
## subscription separately and is treated as the PubSub default if less than
## 1. Note this setting does not limit the number of messages that can be
## processed concurrently (use "max_outstanding_messages" instead).
# max_receiver_go_routines = 0
## Optional. If true, Telegraf will attempt to base64 decode the
## PubSub message data before parsing. Many GCP services that
## output JSON to Google PubSub base64-encode the JSON payload.
# base64_data = false
## Content encoding for message payloads. Can be set to "gzip", or to
## "identity" to apply no encoding.
# content_encoding = "identity"
## If content encoding is not "identity", sets the maximum allowed size,
## in bytes, for a message payload when it's decompressed. Can be increased
## for larger payloads or reduced to protect against decompression bombs.
## Acceptable units are B, KiB, KB, MiB, MB...
# max_decompression_size = "500MB"
```
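
Only `project`, `subscription`, and `data_format` are required; assuming an
existing pull subscription and Application Default Credentials, a minimal
configuration reduces to:

```toml
[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "my-subscription"
  data_format = "influx"
```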
### Multiple Subscriptions and Topics
This plugin assumes you have already created a PULL subscription for a given
PubSub topic. To learn how to do so, see [how to create a subscription][pubsub
create sub].
Each plugin instance can listen to only one subscription, so you will need
to define multiple instances of the plugin to pull messages from multiple
subscriptions/topics, as sketched below.
[pubsub create sub]: https://cloud.google.com/pubsub/docs/admin#create_a_pull_subscription
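
A sketch with hypothetical subscription names, one plugin instance per
subscription:

```toml
[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "metrics-sub-a"
  data_format = "influx"

[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "metrics-sub-b"
  data_format = "influx"
```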
## Metrics
## Example Output
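With `data_format = "influx"`, a message whose payload is the line-protocol
sample used in this plugin's tests would yield output equivalent to:

```text
cpu_load_short,host=server01 value=23422.0 1422568543702900257
```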


@@ -0,0 +1,357 @@
//go:generate ../../../tools/readme_config_includer/generator
package cloud_pubsub
import (
"context"
_ "embed"
"encoding/base64"
"errors"
"fmt"
"sync"
"time"
"cloud.google.com/go/pubsub"
"golang.org/x/oauth2/google"
"google.golang.org/api/option"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/config"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
//go:embed sample.conf
var sampleConfig string
var once sync.Once
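// Defaults used when the corresponding options are left unset:
// defaultMaxUndeliveredMessages seeds the plugin factory in init() and
// defaultRetryDelaySeconds is the fallback wait in receiveWithRetry().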
const (
defaultMaxUndeliveredMessages = 1000
defaultRetryDelaySeconds = 5
)
type PubSub struct {
sync.Mutex
CredentialsFile string `toml:"credentials_file"`
Project string `toml:"project"`
Subscription string `toml:"subscription"`
// Subscription ReceiveSettings
MaxExtension config.Duration `toml:"max_extension"`
MaxOutstandingMessages int `toml:"max_outstanding_messages"`
MaxOutstandingBytes int `toml:"max_outstanding_bytes"`
MaxReceiverGoRoutines int `toml:"max_receiver_go_routines"`
// Agent settings
MaxMessageLen int `toml:"max_message_len"`
MaxUndeliveredMessages int `toml:"max_undelivered_messages"`
RetryReceiveDelaySeconds int `toml:"retry_delay_seconds"`
Base64Data bool `toml:"base64_data"`
ContentEncoding string `toml:"content_encoding"`
MaxDecompressionSize config.Size `toml:"max_decompression_size"`
Log telegraf.Logger `toml:"-"`
sub subscription
stubSub func() subscription
cancel context.CancelFunc
parser telegraf.Parser
wg *sync.WaitGroup
acc telegraf.TrackingAccumulator
undelivered map[telegraf.TrackingID]message
sem semaphore
decoder internal.ContentDecoder
decoderMutex sync.Mutex
}
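// semaphore is a counting semaphore built on a buffered channel: onMessage
// sends an empty struct to claim a slot, and waitForDelivery receives to
// release one, capping in-flight messages at max_undelivered_messages.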
type (
empty struct{}
semaphore chan empty
)
func (*PubSub) SampleConfig() string {
return sampleConfig
}
func (ps *PubSub) Init() error {
if ps.Subscription == "" {
return errors.New(`"subscription" is required`)
}
if ps.Project == "" {
return errors.New(`"project" is required`)
}
switch ps.ContentEncoding {
case "", "identity":
ps.ContentEncoding = "identity"
case "gzip":
var err error
var options []internal.DecodingOption
if ps.MaxDecompressionSize > 0 {
options = append(options, internal.WithMaxDecompressionSize(int64(ps.MaxDecompressionSize)))
}
ps.decoder, err = internal.NewContentDecoder(ps.ContentEncoding, options...)
if err != nil {
return err
}
default:
return fmt.Errorf("invalid value %q for content_encoding", ps.ContentEncoding)
}
return nil
}
func (ps *PubSub) SetParser(parser telegraf.Parser) {
ps.parser = parser
}
// Start initializes the plugin and begins processing messages from Google
// PubSub. Two goroutines are started: one pulling from the subscription, one
// receiving delivery notifications from the accumulator.
func (ps *PubSub) Start(ac telegraf.Accumulator) error {
ps.sem = make(semaphore, ps.MaxUndeliveredMessages)
ps.acc = ac.WithTracking(ps.MaxUndeliveredMessages)
// Create top-level context with cancel that will be called on Stop().
ctx, cancel := context.WithCancel(context.Background())
ps.cancel = cancel
if ps.stubSub != nil {
ps.sub = ps.stubSub()
} else {
subRef, err := ps.getGCPSubscription(ps.Subscription)
if err != nil {
return fmt.Errorf("unable to create subscription handle: %w", err)
}
ps.sub = subRef
}
ps.wg = &sync.WaitGroup{}
// Start goroutine to handle delivery notifications from accumulator.
ps.wg.Add(1)
go func() {
defer ps.wg.Done()
ps.waitForDelivery(ctx)
}()
// Start goroutine for subscription receiver.
ps.wg.Add(1)
go func() {
defer ps.wg.Done()
ps.receiveWithRetry(ctx)
}()
return nil
}
// Gather does nothing for this service input.
func (*PubSub) Gather(telegraf.Accumulator) error {
return nil
}
// Stop ensures the PubSub subscription's receivers are stopped by
// canceling the context, and waits for goroutines to finish.
func (ps *PubSub) Stop() {
ps.cancel()
ps.wg.Wait()
}
// receiveWithRetry is called within a goroutine and keeps a
// subscription.Receive() up and running while the plugin has not been stopped.
func (ps *PubSub) receiveWithRetry(parentCtx context.Context) {
err := ps.startReceiver(parentCtx)
for err != nil && parentCtx.Err() == nil {
ps.Log.Errorf("Receiver for subscription %s exited with error: %v", ps.sub.ID(), err)
delay := defaultRetryDelaySeconds
if ps.RetryReceiveDelaySeconds > 0 {
delay = ps.RetryReceiveDelaySeconds
}
ps.Log.Infof("Waiting %d seconds before attempting to restart receiver...", delay)
time.Sleep(time.Duration(delay) * time.Second)
err = ps.startReceiver(parentCtx)
}
}
func (ps *PubSub) startReceiver(parentCtx context.Context) error {
ps.Log.Infof("Starting receiver for subscription %s...", ps.sub.ID())
cctx, ccancel := context.WithCancel(parentCtx)
err := ps.sub.Receive(cctx, func(ctx context.Context, msg message) {
if err := ps.onMessage(ctx, msg); err != nil {
ps.acc.AddError(fmt.Errorf("unable to add message from subscription %s: %w", ps.sub.ID(), err))
}
})
if err != nil {
ps.acc.AddError(fmt.Errorf("receiver for subscription %s exited: %w", ps.sub.ID(), err))
} else {
ps.Log.Info("Subscription pull ended (no error, most likely stopped)")
}
ccancel()
return err
}
// onMessage handles parsing and adding a received message to the accumulator.
func (ps *PubSub) onMessage(ctx context.Context, msg message) error {
if ps.MaxMessageLen > 0 && len(msg.Data()) > ps.MaxMessageLen {
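// Ack the oversized message anyway so PubSub does not redeliver it; the
// returned error is surfaced through the accumulator by the caller.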
msg.Ack()
return fmt.Errorf("message longer than max_message_len (%d > %d)", len(msg.Data()), ps.MaxMessageLen)
}
data, err := ps.decompressData(msg.Data())
if err != nil {
return fmt.Errorf("unable to decompress %s message: %w", ps.ContentEncoding, err)
}
data, err = ps.decodeB64Data(data)
if err != nil {
return fmt.Errorf("unable to decode base64 message: %w", err)
}
metrics, err := ps.parser.Parse(data)
if err != nil {
msg.Ack()
return fmt.Errorf("unable to parse message: %w", err)
}
if len(metrics) == 0 {
msg.Ack()
once.Do(func() {
ps.Log.Debug(internal.NoMetricsCreatedMsg)
})
return nil
}
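// Claim a semaphore slot before handing metrics to the accumulator. This
// blocks once max_undelivered_messages metric groups are in flight, until
// waitForDelivery releases a slot.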
select {
case <-ctx.Done():
return ctx.Err()
case ps.sem <- empty{}:
break
}
ps.Lock()
defer ps.Unlock()
id := ps.acc.AddTrackingMetricGroup(metrics)
if ps.undelivered == nil {
ps.undelivered = make(map[telegraf.TrackingID]message)
}
ps.undelivered[id] = msg
return nil
}
func (ps *PubSub) decompressData(data []byte) ([]byte, error) {
if ps.ContentEncoding == "identity" {
return data, nil
}
ps.decoderMutex.Lock()
defer ps.decoderMutex.Unlock()
data, err := ps.decoder.Decode(data)
if err != nil {
return nil, err
}
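// Copy the decoded bytes before returning: the decoder may reuse its
// internal buffer on the next Decode call.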
decompressedData := make([]byte, len(data))
copy(decompressedData, data)
data = decompressedData
return data, nil
}
func (ps *PubSub) decodeB64Data(data []byte) ([]byte, error) {
if ps.Base64Data {
return base64.StdEncoding.DecodeString(string(data))
}
return data, nil
}
func (ps *PubSub) waitForDelivery(parentCtx context.Context) {
for {
select {
case <-parentCtx.Done():
return
case info := <-ps.acc.Delivered():
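// A tracked metric group reached the outputs: free a semaphore slot and
// ack the PubSub message that produced it.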
<-ps.sem
msg := ps.removeDelivered(info.ID())
if msg != nil {
msg.Ack()
}
}
}
}
func (ps *PubSub) removeDelivered(id telegraf.TrackingID) message {
ps.Lock()
defer ps.Unlock()
msg, ok := ps.undelivered[id]
if !ok {
return nil
}
delete(ps.undelivered, id)
return msg
}
func (ps *PubSub) getPubSubClient() (*pubsub.Client, error) {
var credsOpt option.ClientOption
if ps.CredentialsFile != "" {
credsOpt = option.WithCredentialsFile(ps.CredentialsFile)
} else {
creds, err := google.FindDefaultCredentials(context.Background(), pubsub.ScopeCloudPlatform)
if err != nil {
return nil, fmt.Errorf(
"unable to find GCP Application Default Credentials: %w. "+
"Either set ADC or provide CredentialsFile config", err)
}
credsOpt = option.WithCredentials(creds)
}
client, err := pubsub.NewClient(
context.Background(),
ps.Project,
credsOpt,
option.WithScopes(pubsub.ScopeCloudPlatform),
option.WithUserAgent(internal.ProductToken()),
)
if err != nil {
return nil, fmt.Errorf("unable to generate PubSub client: %w", err)
}
return client, nil
}
func (ps *PubSub) getGCPSubscription(subID string) (subscription, error) {
client, err := ps.getPubSubClient()
if err != nil {
return nil, err
}
s := client.Subscription(subID)
s.ReceiveSettings = pubsub.ReceiveSettings{
NumGoroutines: ps.MaxReceiverGoRoutines,
MaxExtension: time.Duration(ps.MaxExtension),
MaxOutstandingMessages: ps.MaxOutstandingMessages,
MaxOutstandingBytes: ps.MaxOutstandingBytes,
}
return &gcpSubscription{s}, nil
}
func init() {
inputs.Add("cloud_pubsub", func() telegraf.Input {
ps := &PubSub{
MaxUndeliveredMessages: defaultMaxUndeliveredMessages,
}
return ps
})
}


@@ -0,0 +1,300 @@
package cloud_pubsub
import (
"encoding/base64"
"errors"
"testing"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/parsers/influx"
"github.com/influxdata/telegraf/testutil"
)
const (
msgInflux = "cpu_load_short,host=server01 value=23422.0 1422568543702900257\n"
)
// Test ingesting InfluxDB-format PubSub message
func TestRunParse(t *testing.T) {
subID := "sub-run-parse"
testParser := &influx.Parser{}
require.NoError(t, testParser.Init())
sub := &stubSub{
id: subID,
messages: make(chan *testMsg, 100),
}
sub.receiver = testMessagesReceive(sub)
decoder, err := internal.NewContentDecoder("identity")
require.NoError(t, err)
ps := &PubSub{
Log: testutil.Logger{},
parser: testParser,
stubSub: func() subscription { return sub },
Project: "projectIDontMatterForTests",
Subscription: subID,
MaxUndeliveredMessages: defaultMaxUndeliveredMessages,
decoder: decoder,
}
acc := &testutil.Accumulator{}
require.NoError(t, ps.Init())
require.NoError(t, ps.Start(acc))
defer ps.Stop()
require.NotNil(t, ps.sub)
testTracker := &testTracker{}
msg := &testMsg{
value: msgInflux,
tracker: testTracker,
}
sub.messages <- msg
acc.Wait(1)
require.Equal(t, 1, acc.NFields())
metric := acc.Metrics[0]
validateTestInfluxMetric(t, metric)
}
// Test ingesting a base64-encoded InfluxDB-format PubSub message
func TestRunBase64(t *testing.T) {
subID := "sub-run-base64"
testParser := &influx.Parser{}
require.NoError(t, testParser.Init())
sub := &stubSub{
id: subID,
messages: make(chan *testMsg, 100),
}
sub.receiver = testMessagesReceive(sub)
decoder, err := internal.NewContentDecoder("identity")
require.NoError(t, err)
ps := &PubSub{
Log: testutil.Logger{},
parser: testParser,
stubSub: func() subscription { return sub },
Project: "projectIDontMatterForTests",
Subscription: subID,
MaxUndeliveredMessages: defaultMaxUndeliveredMessages,
Base64Data: true,
decoder: decoder,
}
acc := &testutil.Accumulator{}
require.NoError(t, ps.Init())
require.NoError(t, ps.Start(acc))
defer ps.Stop()
require.NotNil(t, ps.sub)
testTracker := &testTracker{}
msg := &testMsg{
value: base64.StdEncoding.EncodeToString([]byte(msgInflux)),
tracker: testTracker,
}
sub.messages <- msg
acc.Wait(1)
require.Equal(t, 1, acc.NFields())
metric := acc.Metrics[0]
validateTestInfluxMetric(t, metric)
}
func TestRunGzipDecode(t *testing.T) {
subID := "sub-run-gzip"
testParser := &influx.Parser{}
require.NoError(t, testParser.Init())
sub := &stubSub{
id: subID,
messages: make(chan *testMsg, 100),
}
sub.receiver = testMessagesReceive(sub)
decoder, err := internal.NewContentDecoder("gzip")
require.NoError(t, err)
ps := &PubSub{
Log: testutil.Logger{},
parser: testParser,
stubSub: func() subscription { return sub },
Project: "projectIDontMatterForTests",
Subscription: subID,
MaxUndeliveredMessages: defaultMaxUndeliveredMessages,
ContentEncoding: "gzip",
decoder: decoder,
}
acc := &testutil.Accumulator{}
require.NoError(t, ps.Init())
require.NoError(t, ps.Start(acc))
defer ps.Stop()
require.NotNil(t, ps.sub)
testTracker := &testTracker{}
enc, err := internal.NewGzipEncoder()
require.NoError(t, err)
gzippedMsg, err := enc.Encode([]byte(msgInflux))
require.NoError(t, err)
msg := &testMsg{
value: string(gzippedMsg),
tracker: testTracker,
}
sub.messages <- msg
acc.Wait(1)
require.Equal(t, 1, acc.NFields())
metric := acc.Metrics[0]
validateTestInfluxMetric(t, metric)
}
func TestRunInvalidMessages(t *testing.T) {
subID := "sub-invalid-messages"
testParser := &influx.Parser{}
require.NoError(t, testParser.Init())
sub := &stubSub{
id: subID,
messages: make(chan *testMsg, 100),
}
sub.receiver = testMessagesReceive(sub)
decoder, err := internal.NewContentDecoder("identity")
require.NoError(t, err)
ps := &PubSub{
Log: testutil.Logger{},
parser: testParser,
stubSub: func() subscription { return sub },
Project: "projectIDontMatterForTests",
Subscription: subID,
MaxUndeliveredMessages: defaultMaxUndeliveredMessages,
decoder: decoder,
}
acc := &testutil.Accumulator{}
require.NoError(t, ps.Init())
require.NoError(t, ps.Start(acc))
defer ps.Stop()
require.NotNil(t, ps.sub)
testTracker := &testTracker{}
msg := &testMsg{
value: "~invalidInfluxMsg~",
tracker: testTracker,
}
sub.messages <- msg
acc.WaitError(1)
// Make sure we acknowledged the message so we don't receive it again.
testTracker.waitForAck(1)
require.Equal(t, 0, acc.NFields())
}
func TestRunOverlongMessages(t *testing.T) {
subID := "sub-message-too-long"
acc := &testutil.Accumulator{}
testParser := &influx.Parser{}
require.NoError(t, testParser.Init())
sub := &stubSub{
id: subID,
messages: make(chan *testMsg, 100),
}
sub.receiver = testMessagesReceive(sub)
decoder, err := internal.NewContentDecoder("identity")
require.NoError(t, err)
ps := &PubSub{
Log: testutil.Logger{},
parser: testParser,
stubSub: func() subscription { return sub },
Project: "projectIDontMatterForTests",
Subscription: subID,
MaxUndeliveredMessages: defaultMaxUndeliveredMessages,
decoder: decoder,
// Add MaxMessageLen Param
MaxMessageLen: 1,
}
require.NoError(t, ps.Init())
require.NoError(t, ps.Start(acc))
defer ps.Stop()
require.NotNil(t, ps.sub)
testTracker := &testTracker{}
msg := &testMsg{
value: msgInflux,
tracker: testTracker,
}
sub.messages <- msg
acc.WaitError(1)
// Make sure we acknowledged the message so we don't receive it again.
testTracker.waitForAck(1)
require.Equal(t, 0, acc.NFields())
}
func TestRunErrorInSubscriber(t *testing.T) {
subID := "sub-unexpected-error"
acc := &testutil.Accumulator{}
testParser := &influx.Parser{}
require.NoError(t, testParser.Init())
sub := &stubSub{
id: subID,
messages: make(chan *testMsg, 100),
}
fakeErrStr := "a fake error"
sub.receiver = testMessagesError(errors.New(fakeErrStr))
decoder, err := internal.NewContentDecoder("identity")
require.NoError(t, err)
ps := &PubSub{
Log: testutil.Logger{},
parser: testParser,
stubSub: func() subscription { return sub },
Project: "projectIDontMatterForTests",
Subscription: subID,
MaxUndeliveredMessages: defaultMaxUndeliveredMessages,
decoder: decoder,
RetryReceiveDelaySeconds: 1,
}
require.NoError(t, ps.Init())
require.NoError(t, ps.Start(acc))
defer ps.Stop()
require.NotNil(t, ps.sub)
acc.WaitError(1)
require.Regexp(t, fakeErrStr, acc.Errors[0])
}
func validateTestInfluxMetric(t *testing.T, m *testutil.Metric) {
require.Equal(t, "cpu_load_short", m.Measurement)
require.Equal(t, "server01", m.Tags["host"])
require.InDelta(t, 23422.0, m.Fields["value"], testutil.DefaultDelta)
require.Equal(t, int64(1422568543702900257), m.Time.UnixNano())
}


@@ -0,0 +1,85 @@
# Read metrics from Google PubSub
[[inputs.cloud_pubsub]]
## Required. Name of Google Cloud Platform (GCP) Project that owns
## the given PubSub subscription.
project = "my-project"
## Required. Name of PubSub subscription to ingest metrics from.
subscription = "my-subscription"
## Required. Data format to consume.
## Each data format has its own unique set of configuration options.
## Read more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
## Optional. Filepath for GCP credentials JSON file to authorize calls to
## PubSub APIs. If not set explicitly, Telegraf will attempt to use
## Application Default Credentials, which is preferred.
# credentials_file = "path/to/my/creds.json"
## Optional. Number of seconds to wait before attempting to restart the
## PubSub subscription receiver after an unexpected error.
## If the streaming pull for a PubSub Subscription fails (receiver),
## the agent attempts to restart receiving messages after this many seconds.
# retry_delay_seconds = 5
## Optional. Maximum byte length of a message to consume.
## Larger messages are dropped with an error. If zero, negative, or
## unspecified, this is treated as no limit.
# max_message_len = 1000000
## Max undelivered messages
## This plugin uses tracking metrics, which ensure messages are read to
## outputs before acknowledging them to the original broker to ensure data
## is not lost. This option sets the maximum messages to read from the
## broker that have not been written by an output.
##
## This value needs to be picked with awareness of the agent's
## metric_batch_size value as well. Setting max undelivered messages too high
## can result in a constant stream of data batches to the output, while
## setting it too low may never flush the broker's messages.
# max_undelivered_messages = 1000
## The following are optional Subscription ReceiveSettings in PubSub.
## Read more about these values:
## https://godoc.org/cloud.google.com/go/pubsub#ReceiveSettings
## Optional. Maximum number of seconds for which a PubSub subscription
## should auto-extend the PubSub ACK deadline for each message. If less than
## 0, auto-extension is disabled.
# max_extension = 0
## Optional. Maximum number of unprocessed messages in PubSub
## (unacknowledged but not yet expired in PubSub).
## A value of 0 is treated as the default PubSub value.
## Negative values will be treated as unlimited.
# max_outstanding_messages = 0
## Optional. Maximum size in bytes of unprocessed messages in PubSub
## (unacknowledged but not yet expired in PubSub).
## A value of 0 is treated as the default PubSub value.
## Negative values will be treated as unlimited.
# max_outstanding_bytes = 0
## Optional. Max number of goroutines a PubSub Subscription receiver can spawn
## to pull messages from PubSub concurrently. This limit applies to each
## subscription separately and is treated as the PubSub default if less than
## 1. Note this setting does not limit the number of messages that can be
## processed concurrently (use "max_outstanding_messages" instead).
# max_receiver_go_routines = 0
## Optional. If true, Telegraf will attempt to base64 decode the
## PubSub message data before parsing. Many GCP services that
## output JSON to Google PubSub base64-encode the JSON payload.
# base64_data = false
## Content encoding for message payloads. Can be set to "gzip", or to
## "identity" to apply no encoding.
# content_encoding = "identity"
## If content encoding is not "identity", sets the maximum allowed size,
## in bytes, for a message payload when it's decompressed. Can be increased
## for larger payloads or reduced to protect against decompression bombs.
## Acceptable units are B, KiB, KB, MiB, MB...
# max_decompression_size = "500MB"


@@ -0,0 +1,85 @@
package cloud_pubsub
import (
"context"
"time"
"cloud.google.com/go/pubsub"
)
type (
subscription interface {
// ID returns the unique identifier of the subscription.
ID() string
// Receive starts receiving messages from the subscription and processes them using the provided function.
Receive(ctx context.Context, f func(context.Context, message)) error
}
message interface {
// Ack acknowledges the message, indicating successful processing.
Ack()
// Nack negatively acknowledges the message, indicating it should be redelivered.
Nack()
// ID returns the unique identifier of the message.
ID() string
// Data returns the payload of the message.
Data() []byte
// Attributes returns the attributes of the message as a key-value map.
Attributes() map[string]string
// PublishTime returns the time when the message was published.
PublishTime() time.Time
}
gcpSubscription struct {
sub *pubsub.Subscription
}
gcpMessage struct {
msg *pubsub.Message
}
)
// ID returns the unique identifier of the subscription.
func (s *gcpSubscription) ID() string {
if s.sub == nil {
return ""
}
return s.sub.ID()
}
// Receive starts receiving messages from the subscription and processes them using the provided function.
func (s *gcpSubscription) Receive(ctx context.Context, f func(context.Context, message)) error {
return s.sub.Receive(ctx, func(cctx context.Context, m *pubsub.Message) {
f(cctx, &gcpMessage{m})
})
}
// Ack acknowledges the message, indicating successful processing.
func (env *gcpMessage) Ack() {
env.msg.Ack()
}
// Nack negatively acknowledges the message, indicating it should be redelivered.
func (env *gcpMessage) Nack() {
env.msg.Nack()
}
// ID returns the unique identifier of the message.
func (env *gcpMessage) ID() string {
return env.msg.ID
}
// Data returns the payload of the message.
func (env *gcpMessage) Data() []byte {
return env.msg.Data
}
// Attributes returns the attributes of the message as a key-value map.
func (env *gcpMessage) Attributes() map[string]string {
return env.msg.Attributes
}
// PublishTime returns the time when the message was published.
func (env *gcpMessage) PublishTime() time.Time {
return env.msg.PublishTime
}


@@ -0,0 +1,108 @@
package cloud_pubsub
import (
"context"
"sync"
"time"
)
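// stubSub is an in-memory stand-in for a GCP subscription, letting tests
// inject messages and receiver behavior without contacting PubSub.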
type stubSub struct {
id string
messages chan *testMsg
receiver receiveFunc
}
func (s *stubSub) ID() string {
return s.id
}
func (s *stubSub) Receive(ctx context.Context, f func(context.Context, message)) error {
return s.receiver(ctx, f)
}
type receiveFunc func(ctx context.Context, f func(context.Context, message)) error
func testMessagesError(expectedErr error) receiveFunc {
return func(context.Context, func(context.Context, message)) error {
return expectedErr
}
}
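// testMessagesReceive feeds messages from the stub's channel to the handler
// until the test context is cancelled.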
func testMessagesReceive(s *stubSub) receiveFunc {
return func(ctx context.Context, f func(context.Context, message)) error {
for {
select {
case <-ctx.Done():
return ctx.Err()
case m := <-s.messages:
f(ctx, m)
}
}
}
}
type testMsg struct {
id string
value string
attributes map[string]string
publishTime time.Time
tracker *testTracker
}
func (tm *testMsg) Ack() {
tm.tracker.ack()
}
func (tm *testMsg) Nack() {
tm.tracker.nack()
}
func (tm *testMsg) ID() string {
return tm.id
}
func (tm *testMsg) Data() []byte {
return []byte(tm.value)
}
func (tm *testMsg) Attributes() map[string]string {
return tm.attributes
}
func (tm *testMsg) PublishTime() time.Time {
return tm.publishTime
}
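// testTracker counts acks and nacks so tests can block until a message has
// been acknowledged; ack and nack broadcast on the lazily created Cond.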
type testTracker struct {
sync.Mutex
*sync.Cond
numAcks int
numNacks int
}
func (t *testTracker) waitForAck(num int) {
t.Lock()
if t.Cond == nil {
t.Cond = sync.NewCond(&t.Mutex)
}
for t.numAcks < num {
t.Wait()
}
t.Unlock()
}
func (t *testTracker) ack() {
t.Lock()
defer t.Unlock()
t.numAcks++
if t.Cond != nil {
t.Broadcast()
}
}
func (t *testTracker) nack() {
t.Lock()
defer t.Unlock()
t.numNacks++
if t.Cond != nil {
t.Broadcast()
}
}