
[exporterhelper] Add WithRequestQueue option to the exporter #8853

Merged: 1 commit into open-telemetry:main from request-exporter-queue on Feb 3, 2024

Conversation

dmitryax (Member) commented Nov 12, 2023

Introduces a way to enable the queue in the new exporter helper using the developer interface suggested in #8248 (comment).

The new configuration interface for end users provides a new `queue_size_items` option to limit the queue by the number of spans, log records, or metric data points. The previous way to limit the queue by the number of requests is preserved under the same field, `queue_size`, which will later be deprecated through a longer transition process.

Tracking issue: #8122
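To illustrate the difference between the two sizing modes, here is a minimal, self-contained sketch of a queue bounded by total items rather than by the number of requests. The names (`itemsQueue`, `Request`, `tracesRequest`) are illustrative assumptions, not the actual exporterhelper types.

```go
package main

import (
	"errors"
	"fmt"
)

// Request mirrors the exporter helper's notion of a request that can
// report how many items (spans, log records, data points) it carries.
type Request interface {
	ItemsCount() int
}

type tracesRequest struct{ spans int }

func (r tracesRequest) ItemsCount() int { return r.spans }

// itemsQueue bounds its capacity by total items rather than by the
// number of requests, as the new queue_size_items option does.
type itemsQueue struct {
	capacityItems int
	usedItems     int
	buf           []Request
}

var errQueueFull = errors.New("sending queue is full")

// Offer rejects a request when admitting it would exceed the item budget.
func (q *itemsQueue) Offer(req Request) error {
	if q.usedItems+req.ItemsCount() > q.capacityItems {
		return errQueueFull
	}
	q.usedItems += req.ItemsCount()
	q.buf = append(q.buf, req)
	return nil
}

func main() {
	q := &itemsQueue{capacityItems: 100}
	fmt.Println(q.Offer(tracesRequest{spans: 60}) == nil) // true
	fmt.Println(q.Offer(tracesRequest{spans: 50}))        // sending queue is full
}
```

A request-count bound, by contrast, would compare `len(q.buf)+1` against a request capacity and ignore `ItemsCount()` entirely.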

@dmitryax dmitryax requested a review from a team as a code owner November 12, 2023 17:03
@dmitryax dmitryax force-pushed the request-exporter-queue branch 3 times, most recently from f389f5f to 728d706 Compare November 12, 2023 18:12
codecov bot commented Nov 12, 2023

Codecov Report

Attention: 3 lines in your changes are missing coverage. Please review.

Comparison: base (7abb962) 90.40% vs. head (a6279d9) 90.39%.

| Files | Patch % | Lines |
|---|---|---|
| exporter/exporterhelper/logs.go | 66.66% | 0 Missing and 1 partial ⚠️ |
| exporter/exporterhelper/metrics.go | 66.66% | 0 Missing and 1 partial ⚠️ |
| exporter/exporterhelper/traces.go | 66.66% | 0 Missing and 1 partial ⚠️ |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main    #8853      +/-   ##
==========================================
- Coverage   90.40%   90.39%   -0.01%
==========================================
  Files         344      346       +2
  Lines       18058    18103      +45
==========================================
+ Hits        16325    16365      +40
- Misses       1401     1405       +4
- Partials      332      333       +1
```


@dmitryax dmitryax marked this pull request as draft November 19, 2023 18:34
@dmitryax dmitryax force-pushed the request-exporter-queue branch 2 times, most recently from 0399b37 to 82696bf Compare November 19, 2023 18:40
@dmitryax dmitryax force-pushed the request-exporter-queue branch 7 times, most recently from 8657a33 to 6b26a1d Compare December 1, 2023 01:07
@dmitryax dmitryax force-pushed the request-exporter-queue branch 3 times, most recently from eaad286 to f2d5aee Compare December 10, 2023 00:09
@dmitryax dmitryax marked this pull request as ready for review December 10, 2023 00:09
@dmitryax dmitryax force-pushed the request-exporter-queue branch 5 times, most recently from ebc3091 to 38b688b Compare December 19, 2023 19:03
@dmitryax dmitryax force-pushed the request-exporter-queue branch 5 times, most recently from ed9dcb4 to 5cdf3c4 Compare December 27, 2023 06:20
```go
// NewPersistentQueueFactory.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
type PersistentQueueConfig struct {
```
Member:

I would be interested to see how people decide to use this vs. QueueConfig in their configs. When do I pick one or the other?

Member (Author):

QueueConfig is used only for exporters that don't support a persistent queue, and PersistentQueueConfig is for exporters that support both. Maybe we don't need to support memory-queue-only exporters. My idea was that some exporters don't want to provide serializers but still want to use custom requests.
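The split described here can be sketched as one config type embedding the other, so the persistent variant adds only the storage reference. This is an illustrative sketch under that assumption; field names are simplified, not the exact API.

```go
package main

import "fmt"

// QueueConfig covers exporters that only support an in-memory queue.
type QueueConfig struct {
	Enabled      bool
	NumConsumers int
	QueueSize    int
}

// PersistentQueueConfig extends QueueConfig with a storage reference for
// exporters that also provide request serializers and can therefore
// support a persistent queue.
type PersistentQueueConfig struct {
	QueueConfig
	StorageID *string // reference to a storage extension, if set
}

func main() {
	storage := "file_storage"
	cfg := PersistentQueueConfig{
		QueueConfig: QueueConfig{Enabled: true, NumConsumers: 10, QueueSize: 1000},
		StorageID:   &storage,
	}
	// Embedded fields are promoted, so shared options read the same way
	// for both config types.
	fmt.Println(cfg.Enabled, *cfg.StorageID) // true file_storage
}
```

With embedding, code that only understands the memory-queue options can still consume `cfg.QueueConfig` unchanged.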

Member:

> My idea was that some exporters don't want to provide serializers but still want to use custom requests.

Maybe we can have some default pdata-based serializers in that case? It may complicate the implementation design too much...

Member (Author):

That would require asking the user for a Request->pdata translator. Right now we ask for pdata->Request only.

Member (Author):

Otherwise, we can apply the conversion after the queue. This can significantly complicate the implementation, as you mentioned.

Resolved review threads (outdated):
- config/configqueue/queue.go
- consumer/go.mod
- exporter/debugexporter/go.mod
- exporter/exporterhelper/internal/bounded_memory_queue.go
- exporter/exporterhelper/internal/persistent_queue.go
```go
// This option should be used with the new exporter helpers New[Traces|Metrics|Logs]RequestExporter.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
func WithRequestQueue(cfg configqueue.QueueConfig, queueFactory QueueFactory) Option {
```
Member:

Instead of having a new public QueueFactory, what about changing this to `func WithRequestQueue(queue Queue, numConsumers int)`?

Member (Author):

I'm not sure how to pass QueueCreateSettings then. Should we move it to the Start method?

Member:

Regarding QueueCreateSettings: what is in there that the user doesn't have access to, or that you think is duplicated?

dmitryax (Member, Author) commented Dec 31, 2023:

We would need to ask the user to pass at least component.DataType and exporter.CreateSettings to the persistent queue constructor, in addition to the exporter helper itself.

Contributor:

This PR was marked stale due to lack of activity. It will be closed in 14 days.

Resolved review threads (outdated):
- exporter/exporterqueue/queue.go (3 threads)
- exporter/internal/err.go
@dmitryax dmitryax force-pushed the request-exporter-queue branch 4 times, most recently from db8cbee to 902e47a Compare February 2, 2024 22:27
@bogdandrutu bogdandrutu merged commit bdbb5f3 into open-telemetry:main Feb 3, 2024
46 checks passed
@github-actions github-actions bot added this to the next release milestone Feb 3, 2024
@dmitryax dmitryax deleted the request-exporter-queue branch February 5, 2024 02:13