
server: improve ListPath's memory consumption with batched send #2712

Closed

Conversation

@themiron (Contributor) commented Sep 24, 2023

With a lot of paths (hundreds of thousands) gobgp may OOM or get stuck in swapping.
Allow specifying a max batch size via gRPC and use 16K by default, without preallocation on the first run, so it should not affect performance/allocations with a small amount of paths and safely improves things for any number of paths greater than 16K.

Supersedes #2605 as it is less invasive, fixes the error order, is more memory friendly, and is used in production.
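
For illustration, here is a minimal client-side sketch of how the new field could be set; the identifiers used (NewGobgpApiClient, BatchSize, TableType_GLOBAL, the listen address, and so on) are assumptions based on the generated gRPC bindings and the proto change below, not code from this PR:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	api "github.com/osrg/gobgp/v3/api"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical client: the address, field names (BatchSize) and enum values
	// are assumptions, not code taken from this PR.
	conn, err := grpc.Dial("127.0.0.1:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := api.NewGobgpApiClient(conn)
	stream, err := client.ListPath(context.Background(), &api.ListPathRequest{
		TableType: api.TableType_GLOBAL,
		Family:    &api.Family{Afi: api.Family_AFI_IP, Safi: api.Family_SAFI_UNICAST},
		BatchSize: 16384, // cap the number of paths the server buffers per flush
	})
	if err != nil {
		log.Fatal(err)
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(resp.GetDestination())
	}
}
```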

@themiron (Contributor, Author) commented:

@fujita hello, kindly please take a look

@themiron force-pushed the batched-send branch 2 times, most recently from 71ff366 to 116c538 on September 26, 2023 13:30
api/gobgp.proto Outdated
@@ -265,6 +265,8 @@ message ListPathRequest {
// enable_only_binary == true means that only nlri_binary and pattrs_binary
// will be used instead of nlri and pattrs for each Path in ListPathResponse.
bool enable_only_binary = 9;
// max amount of paths to be allocated, default is 16K
int64 batch_size = 10;
Member commented:

Why not unsigned type like uint64?

Contributor (Author) replied:

Just to avoid the sign bit during conversion to int on x64; on the other hand, with uint64 an error return would be unnecessary.
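
To illustrate the sign-bit point, an editorial sketch (not part of the PR): a uint64 larger than math.MaxInt64 keeps its high bit when converted to a signed integer, so the result goes negative on 64-bit platforms, which is exactly what choosing a signed int64 field avoids.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Editorial illustration, not code from the PR: the high (sign) bit of a
	// large uint64 survives conversion to a signed integer on 64-bit platforms.
	var fromWire uint64 = math.MaxUint64
	fmt.Println(int(fromWire))   // -1
	fmt.Println(int64(fromWire)) // -1
}
```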

@@ -42,6 +42,8 @@ import (
"github.com/osrg/gobgp/v3/pkg/packet/bgp"
)

const defaultListPathBatchSize = 16384
Member commented:

Why 16k? By default, we should not change the behavior. u64 max?

Contributor (Author) replied:

Before 21093fb the default was 1, which caused quite slow performance. After upgrading gobgp to v3 with n*100K of paths we began to experience OOMs, so it was empirically found that a default of 16K still has good performance (one-by-one allocation is kept for small path amounts) and has no dramatic memory impact, unlike u64 max, which would mean no limit by default.
What do you think about treating the original default here as 1, and 16K as a feasible (and noticeable) improvement, with the optional possibility to specify u64 max if one knows what one is doing and has the free memory for that kind of load?

Member replied:
The default was 1 three years ago. We set the default to no limit for the last three years. I don't think that we need to change the default behavior.

@themiron (Contributor, Author) replied on Oct 2, 2023:

Got it, thank you for the suggestion. Please see the change to MaxUint64 by default.

@themiron force-pushed the batched-send branch 3 times, most recently from 08eb5ca to 14e1000 on October 2, 2023 05:13
With a lot of paths (hundreds of thousands) gobgp may OOM or
get stuck in swapping.
Allow specifying a max batch size via gRPC and keep the batch
size unlimited by default since 21093fb,
without preallocation on the first run, so it still should not
affect performance/allocations with a small amount of paths.

Signed-off-by: Vladislav Grishenko <themiron@yandex-team.ru>
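
For readers, a rough sketch of the batched-send idea described above. This is an editorial illustration with assumed helper shapes (the listPathBatched function and the walk callback are made up for the example), not the code merged in this PR:

```go
package server

import (
	"math"

	api "github.com/osrg/gobgp/v3/api"
)

// listPathBatched sketches the batching idea: walk invokes fn once per
// destination in the table; instead of collecting every destination before
// streaming, we buffer at most batchSize of them and flush the buffer to the
// gRPC stream as we go, so memory stays bounded by the batch size rather than
// by the table size.
func listPathBatched(req *api.ListPathRequest, stream api.GobgpApi_ListPathServer,
	walk func(fn func(d *api.Destination))) error {

	batchSize := req.BatchSize
	if batchSize <= 0 {
		batchSize = math.MaxInt64 // effectively unlimited, the default per the review
	}

	var batch []*api.Destination // no preallocation, so small tables behave as before
	var sendErr error

	flush := func() {
		for _, d := range batch {
			if err := stream.Send(&api.ListPathResponse{Destination: d}); err != nil {
				sendErr = err
				return
			}
		}
		batch = batch[:0]
	}

	walk(func(d *api.Destination) {
		if sendErr != nil {
			return // stop buffering once the stream has failed
		}
		batch = append(batch, d)
		if int64(len(batch)) >= batchSize {
			flush()
		}
	})
	if sendErr == nil {
		flush() // send whatever is left over
	}
	return sendErr
}
```
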
@fujita (Member) commented Oct 2, 2023

thanks

@fujita closed this Oct 2, 2023
@themiron (Contributor, Author) commented Oct 2, 2023

thank you for the merge

@themiron deleted the batched-send branch on October 2, 2023 06:02