
Commit

[8.17](backport #41834) [filebeat][gcs] - Refactor & cleanup with updates to some default values and docs (#41986)

* [filebeat][gcs] - Refactor & cleanup with updates to some default values and docs (#41834)

(cherry picked from commit 01cc134)

* Update CHANGELOG.next.asciidoc

---------

Co-authored-by: ShourieG <shourie.ganguly@elastic.co>
mergify[bot] and ShourieG authored Dec 11, 2024
1 parent 779f139 commit 2ae64cc
Showing 7 changed files with 46 additions and 60 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.next.asciidoc
@@ -323,6 +323,7 @@ https://github.com/elastic/beats/compare/v8.8.1\...main[Check the HEAD diff]
- Improve S3 polling mode states registry when using list prefix option. {pull}41869[41869]
- AWS S3 input registry cleanup for untracked s3 objects. {pull}41694[41694]
- The environment variable `BEATS_AZURE_EVENTHUB_INPUT_TRACING_ENABLED: true` enables internal logs tracer for the azure-eventhub input. {issue}41931[41931] {pull}41932[41932]
- Refactor & cleanup of the Google Cloud Storage input, with updates to default values and documentation. {pull}41834[41834]

*Auditbeat*

18 changes: 7 additions & 11 deletions x-pack/filebeat/docs/inputs/input-gcs.asciidoc
@@ -10,9 +10,7 @@
++++

Use the `google cloud storage input` to read content from files stored in buckets which reside on your Google Cloud.
The input can be configured to work with and without polling, though currently, if polling is disabled it will only
perform a one time passthrough, list the file contents and end the process. Polling is generally recommented for most cases
even though it can get expensive with dealing with a very large number of files.
The input can be configured to work with or without polling, though if polling is disabled it will only perform a single collection of data, list the file contents, and end the process.

*To mitigate errors and ensure a stable processing environment, this input employs the following features:*

@@ -66,12 +64,11 @@ many buckets as we deem fit. We are also able to configure the attributes `max_w
then be applied to all buckets which do not specify any of these attributes explicitly.

NOTE: If the attributes `max_workers`, `poll`, `poll_interval` and `bucket_timeout` are specified at the root level, these can still be overridden at the bucket level with
different values, thus offering extensive flexibility and customization. Examples <<bucket-overrides,below>> show this behaviour.
different values, thus offering extensive flexibility and customization. Examples <<bucket-overrides,below>> show this behavior.

On receiving this config the google cloud storage input will connect to the service and retrieve a `Storage Client` using the given `bucket_name` and
`auth.credentials_file`, then it will spawn two main go-routines, one for each bucket. After this each of these routines (threads) will initialize a scheduler
which will in turn use the `max_workers` value to initialize an in-memory worker pool (thread pool) with `3` `workers` available. Basically that equates to two instances of a worker pool,
one per bucket, each having 3 workers. These `workers` will be responsible for performing `jobs` that process a file (in this case read and output the contents of a file).
which will in turn use the `max_workers` value to initialize an in-memory worker pool (thread pool) with `3` `workers` available. Basically that equates to two instances of a worker pool, one per bucket, each having 3 workers. These `workers` will be responsible for performing `jobs` that process a file (in this case read and output the contents of a file).

NOTE: The scheduler is responsible for scheduling jobs, and uses the `maximum available workers` in the pool, at each iteration, to decide the number of files to retrieve and
process. This keeps work distribution efficient. The scheduler uses the `poll_interval` attribute value to decide how long to wait after each iteration. The `bucket_timeout` value is used to time out calls to the bucket list API if they exceed the given value. Each iteration consists of processing a certain number of files, decided by the `maximum available workers` value.
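
For illustration, the following is a minimal Go sketch of the pattern described above — one routine per bucket driving a worker pool bounded by `max_workers`. It is a simplified sketch, not the input's actual implementation, and `processObject` is a hypothetical stand-in for the real job logic:

[source,go]
----
package main

import (
	"context"
	"fmt"
	"sync"
)

// processObject is a hypothetical stand-in for a job: read one object and
// publish its contents.
func processObject(ctx context.Context, name string) {
	fmt.Println("processing", name)
}

// runBucket sketches one bucket's worker pool: at most maxWorkers jobs run
// concurrently, mirroring the max_workers attribute.
func runBucket(ctx context.Context, objects <-chan string, maxWorkers int) {
	var wg sync.WaitGroup
	sem := make(chan struct{}, maxWorkers)
	for name := range objects {
		select {
		case <-ctx.Done():
			return
		case sem <- struct{}{}: // acquire a worker slot
		}
		wg.Add(1)
		go func(object string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			processObject(ctx, object)
		}(name)
	}
	wg.Wait()
}

func main() {
	objects := make(chan string, 3)
	for _, name := range []string{"a.json", "b.json", "c.json"} {
		objects <- name
	}
	close(objects)
	runBucket(context.Background(), objects, 2)
}
----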
@@ -213,7 +210,7 @@ This is a specific subfield of a bucket. It specifies the bucket name.

This attribute defines the maximum amount of time after which a bucket operation will give up and stop if no response is received (for example: reading a file / listing a file).
It can be defined in the following formats: `{{x}}s`, `{{x}}m`, `{{x}}h`, where `s = seconds`, `m = minutes` and `h = hours`. The value `{{x}}` can be anything we wish.
If no value is specified for this, by default its initialized to `50 seconds`. This attribute can be specified both at the root level of the configuration as well at the bucket level. The bucket level values will always take priority and override the root level values if both are specified. The value of `bucket_timeout` that should be used depends on the size of the files and the network speed. If the timeout is too low, the input will not be able to read the file completely and `context_deadline_exceeded` errors will be seen in the logs. If the timeout is too high, the input will wait for a long time for the file to be read, which can cause the input to be slow. The ratio between the `bucket_timeout` and `poll_interval` should be considered while setting both the values. A low `poll_interval` and a very high `bucket_timeout` can cause resource utilization issues as schedule ops will be spawned every poll iteration. If previous poll ops are still running, this could result in concurrently running ops and so could cause a bottleneck over time.
If no value is specified for this, by default it's initialized to `120 seconds`. This attribute can be specified both at the root level of the configuration as well as at the bucket level. The bucket level values will always take priority and override the root level values if both are specified. The value of `bucket_timeout` that should be used depends on the size of the files and the network speed. If the timeout is too low, the input will not be able to read the file completely and `context_deadline_exceeded` errors will be seen in the logs. If the timeout is too high, the input will wait for a long time for the file to be read, which can cause the input to be slow. The ratio between the `bucket_timeout` and `poll_interval` should be considered while setting both values. A low `poll_interval` and a very high `bucket_timeout` can cause resource utilization issues, as schedule ops will be spawned every poll iteration. If previous poll ops are still running, this could result in concurrently running ops and so could cause a bottleneck over time.
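
To illustrate how such a timeout behaves, here is a sketch (using the standard `cloud.google.com/go/storage` client directly, not the input's own code; the package and function names are made up) of bounding one listing call with a `bucket_timeout`-style deadline, so a slow or unresponsive call surfaces `context.DeadlineExceeded` instead of hanging:

[source,go]
----
package example

import (
	"context"
	"time"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

// listWithTimeout lists object names in a bucket, bounded by a deadline.
func listWithTimeout(ctx context.Context, bkt *storage.BucketHandle, timeout time.Duration) ([]string, error) {
	ctx, cancel := context.WithTimeout(ctx, timeout) // e.g. 120 * time.Second
	defer cancel()

	var names []string
	it := bkt.Objects(ctx, nil)
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, err // includes context.DeadlineExceeded when the deadline fires
		}
		names = append(names, attrs.Name)
	}
	return names, nil
}
----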

[id="attrib-max_workers-gcs"]
[float]
@@ -228,17 +225,16 @@ NOTE: The value of `max_workers` is tied to the `batch_size` currently to ensure
[float]
==== `poll`

This attribute informs the scheduler whether to keep polling for new files or not. Default value of this is `false`, so it will not keep polling if not explicitly
specified. This attribute can be specified both at the root level of the configuration as well at the bucket level. The bucket level values will always
take priority and override the root level values if both are specified.
This attribute informs the scheduler whether to keep polling for new files or not. The default value is `true`. This attribute can be specified both at the
root level of the configuration as well as at the bucket level. The bucket level values will always take priority and override the root level values if both are specified.

[id="attrib-poll_interval-gcs"]
[float]
==== `poll_interval`

This attribute defines the maximum amount of time after which the internal scheduler will make the polling call for the next set of objects/files. It can be
defined in the following formats: `{{x}}s`, `{{x}}m`, `{{x}}h`, where `s = seconds`, `m = minutes` and `h = hours`. The value `{{x}}` can be anything we wish.
Example : `10s` would mean we would like the polling to occur every 10 seconds. If no value is specified for this, by default its initialized to `300 seconds`.
Example: `10s` would mean we would like the polling to occur every 10 seconds. If no value is specified for this, by default it's initialized to `5 minutes`.
This attribute can be specified both at the root level of the configuration as well at the bucket level. The bucket level values will always take priority
and override the root level values if both are specified. The `poll_interval` should be set to a value that is equal to the `bucket_timeout` value. This would ensure that another schedule operation is not started before the current buckets have all been processed. If the `poll_interval` is set to a value that is less than the `bucket_timeout`, then the input will start another schedule operation before the current one has finished, which can cause a bottleneck over time. Having a lower `poll_interval` can make the input faster at the cost of more resource utilization.
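
A simplified sketch of the polling cycle this describes (not the scheduler's actual code; the package and function names are made up): each iteration is bounded by `bucket_timeout`, and the input then waits `poll_interval` before the next iteration, which is why keeping `poll_interval` at or above `bucket_timeout` avoids overlapping schedule operations:

[source,go]
----
package example

import (
	"context"
	"time"
)

// pollBucket runs one bounded listing/processing pass per iteration, then
// waits poll_interval before starting the next iteration.
func pollBucket(ctx context.Context, pollInterval, bucketTimeout time.Duration, listOnce func(context.Context) error) error {
	for {
		iterCtx, cancel := context.WithTimeout(ctx, bucketTimeout)
		err := listOnce(iterCtx) // list and process one batch of objects
		cancel()
		if err != nil {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(pollInterval): // 5 minutes by default; keep >= bucket_timeout
		}
	}
}
----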

4 changes: 1 addition & 3 deletions x-pack/filebeat/input/gcs/client.go
@@ -12,11 +12,9 @@ import (
"cloud.google.com/go/storage"
"golang.org/x/oauth2/google"
"google.golang.org/api/option"

"github.com/elastic/elastic-agent-libs/logp"
)

func fetchStorageClient(ctx context.Context, cfg config, log *logp.Logger) (*storage.Client, error) {
func fetchStorageClient(ctx context.Context, cfg config) (*storage.Client, error) {
if cfg.AlternativeHost != "" {
var h *url.URL
h, err := url.Parse(cfg.AlternativeHost)
37 changes: 25 additions & 12 deletions x-pack/filebeat/input/gcs/config.go
@@ -28,16 +28,16 @@ type config struct {
// Auth - Defines the authentication mechanism to be used for accessing the gcs bucket.
Auth authConfig `config:"auth"`
// MaxWorkers - Defines the maximum number of go routines that will be spawned.
MaxWorkers *int `config:"max_workers,omitempty" validate:"max=5000"`
MaxWorkers int `config:"max_workers" validate:"max=5000"`
// Poll - Defines if polling should be performed on the input bucket source.
Poll *bool `config:"poll,omitempty"`
Poll bool `config:"poll"`
// PollInterval - Defines the maximum amount of time to wait before polling for the next batch of objects from the bucket.
PollInterval *time.Duration `config:"poll_interval,omitempty"`
PollInterval time.Duration `config:"poll_interval"`
// ParseJSON - Informs the publisher whether to parse & objectify json data or not. By default this is set to
// false, since it can get expensive dealing with highly nested json data.
ParseJSON *bool `config:"parse_json,omitempty"`
ParseJSON bool `config:"parse_json"`
// BucketTimeOut - Defines the maximum time that the sdk will wait for a bucket api response before timing out.
BucketTimeOut *time.Duration `config:"bucket_timeout,omitempty"`
BucketTimeOut time.Duration `config:"bucket_timeout"`
// Buckets - Defines a list of buckets that will be polled for objects.
Buckets []bucket `config:"buckets" validate:"required"`
// FileSelectors - Defines a list of regex patterns that can be used to filter out objects from the bucket.
@@ -49,17 +49,17 @@ type config struct {
// ExpandEventListFromField - Defines the field name that will be used to expand the event into separate events.
ExpandEventListFromField string `config:"expand_event_list_from_field"`
// This field is only used for system test purposes, to override the HTTP endpoint.
AlternativeHost string `config:"alternative_host,omitempty"`
AlternativeHost string `config:"alternative_host"`
}

// bucket contains the config for each specific object storage bucket in the root account
type bucket struct {
Name string `config:"name" validate:"required"`
MaxWorkers *int `config:"max_workers,omitempty" validate:"max=5000"`
BucketTimeOut *time.Duration `config:"bucket_timeout,omitempty"`
Poll *bool `config:"poll,omitempty"`
PollInterval *time.Duration `config:"poll_interval,omitempty"`
ParseJSON *bool `config:"parse_json,omitempty"`
MaxWorkers *int `config:"max_workers" validate:"max=5000"`
BucketTimeOut *time.Duration `config:"bucket_timeout"`
Poll *bool `config:"poll"`
PollInterval *time.Duration `config:"poll_interval"`
ParseJSON *bool `config:"parse_json"`
FileSelectors []fileSelectorConfig `config:"file_selectors"`
ReaderConfig readerConfig `config:",inline"`
TimeStampEpoch *int64 `config:"timestamp_epoch"`
@@ -78,13 +78,15 @@ type readerConfig struct {
Decoding decoderConfig `config:"decoding"`
}

// authConfig defines the authentication mechanism to be used for accessing the gcs bucket.
// If only one of the two is configured, the 'omitempty' tag prevents the unset option from being serialized in the config.
type authConfig struct {
CredentialsJSON *jsonCredentialsConfig `config:"credentials_json,omitempty"`
CredentialsFile *fileCredentialsConfig `config:"credentials_file,omitempty"`
}

type fileCredentialsConfig struct {
Path string `config:"path,omitempty"`
Path string `config:"path"`
}
type jsonCredentialsConfig struct {
AccountKey string `config:"account_key"`
@@ -115,3 +117,14 @@ func (c authConfig) Validate() error {
return fmt.Errorf("no authentication credentials were configured or detected " +
"(credentials_file, credentials_json, and application default credentials (ADC))")
}

// defaultConfig returns the default configuration for the input
func defaultConfig() config {
return config{
MaxWorkers: 1,
Poll: true,
PollInterval: 5 * time.Minute,
BucketTimeOut: 120 * time.Second,
ParseJSON: false,
}
}
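
As a usage sketch (illustrative only, not part of this change), unpacking a user config on top of `defaultConfig()` keeps these defaults for any field the user does not set; the bucket name and values below are made up:

// Only fields present in the user config override the defaults.
raw := conf.MustNewConfigFrom(map[string]interface{}{
	"max_workers": 3,
	"buckets":     []map[string]interface{}{{"name": "my-bucket"}}, // hypothetical bucket
})
cfg := defaultConfig()
if err := raw.Unpack(&cfg); err != nil {
	// handle the unpack error (validation may also fail here, e.g. missing credentials)
}
// If unpacking succeeds, cfg.MaxWorkers == 3 while cfg.Poll, cfg.PollInterval
// and cfg.BucketTimeOut keep the defaults above.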
39 changes: 9 additions & 30 deletions x-pack/filebeat/input/gcs/input.go
@@ -50,7 +50,7 @@ func Plugin(log *logp.Logger, store cursor.StateStore) v2.Plugin {
}

func configure(cfg *conf.C) ([]cursor.Source, cursor.Input, error) {
config := config{}
config := defaultConfig()
if err := cfg.Unpack(&config); err != nil {
return nil, nil, err
}
@@ -78,44 +78,22 @@ func configure(cfg *conf.C) ([]cursor.Source, cursor.Input, error) {
return sources, &gcsInput{config: config}, nil
}

// tryOverrideOrDefault, overrides global values with local
// bucket level values if present. If both global & local values
// are absent, assigns default values
// tryOverrideOrDefault overrides the bucket level values with global values if the bucket fields are not set
func tryOverrideOrDefault(cfg config, b bucket) bucket {
if b.MaxWorkers == nil {
maxWorkers := 1
if cfg.MaxWorkers != nil {
maxWorkers = *cfg.MaxWorkers
}
b.MaxWorkers = &maxWorkers
b.MaxWorkers = &cfg.MaxWorkers
}
if b.Poll == nil {
var poll bool
if cfg.Poll != nil {
poll = *cfg.Poll
}
b.Poll = &poll
b.Poll = &cfg.Poll
}
if b.PollInterval == nil {
interval := time.Second * 300
if cfg.PollInterval != nil {
interval = *cfg.PollInterval
}
b.PollInterval = &interval
b.PollInterval = &cfg.PollInterval
}
if b.ParseJSON == nil {
parse := false
if cfg.ParseJSON != nil {
parse = *cfg.ParseJSON
}
b.ParseJSON = &parse
b.ParseJSON = &cfg.ParseJSON
}
if b.BucketTimeOut == nil {
timeOut := time.Second * 50
if cfg.BucketTimeOut != nil {
timeOut = *cfg.BucketTimeOut
}
b.BucketTimeOut = &timeOut
b.BucketTimeOut = &cfg.BucketTimeOut
}
if b.TimeStampEpoch == nil {
b.TimeStampEpoch = cfg.TimeStampEpoch
Expand Down Expand Up @@ -173,11 +151,12 @@ func (input *gcsInput) Run(inputCtx v2.Context, src cursor.Source,
cancel()
}()

client, err := fetchStorageClient(ctx, input.config, log)
client, err := fetchStorageClient(ctx, input.config)
if err != nil {
metrics.errorsTotal.Inc()
return err
}

bucket := client.Bucket(currentSource.BucketName).Retryer(
// Use WithBackoff to change the timing of the exponential backoff.
storage.WithBackoff(gax.Backoff{
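
A short illustrative sketch (not part of the input) of the design choice behind this refactor: root-level config fields are now plain values that already carry the defaults, while bucket-level fields stay pointers so that "not configured" can be distinguished from an explicit zero value such as `poll: false`:

// effectivePoll shows why bucket fields remain pointers: nil means "not
// configured", so the root value (already defaulted) is inherited, while an
// explicit bucket-level `poll: false` still wins.
func effectivePoll(b bucket, root config) bool {
	if b.Poll != nil {
		return *b.Poll
	}
	return root.Poll
}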
1 change: 0 additions & 1 deletion x-pack/filebeat/input/gcs/input_stateless.go
@@ -88,7 +88,6 @@ func (in *statelessInput) Run(inputCtx v2.Context, publisher stateless.Publisher
// Since we are only reading, the operation is always idempotent
storage.WithPolicy(storage.RetryAlways),
)

scheduler := newScheduler(pub, bkt, currentSource, &in.config, st, metrics, log)
// allows multiple containers to be scheduled concurrently while testing
// the stateless input is triggered only while testing and till now it did not mimic
6 changes: 3 additions & 3 deletions x-pack/filebeat/input/gcs/input_test.go
@@ -535,7 +535,7 @@ func Test_StorageClient(t *testing.T) {

client, _ := storage.NewClient(context.Background(), option.WithEndpoint(serv.URL), option.WithoutAuthentication(), option.WithHTTPClient(&httpclient))
cfg := conf.MustNewConfigFrom(tt.baseConfig)
conf := config{}
conf := defaultConfig()
err := cfg.Unpack(&conf)
if err != nil {
assert.EqualError(t, err, fmt.Sprint(tt.isError))
@@ -558,8 +558,8 @@ })
})

var timeout *time.Timer
if conf.PollInterval != nil {
timeout = time.NewTimer(1*time.Second + *conf.PollInterval)
if conf.PollInterval != 0 {
timeout = time.NewTimer(1*time.Second + conf.PollInterval)
} else {
timeout = time.NewTimer(5 * time.Second)
}
