gRPC initial window is too large that may cause OOM #2673
Comments
This issue relates to https://github.com/pingcap/ticdc/issues/2553

Data accumulated in the kv client; this may be caused by low throughput of gRPC message processing. Is there any CPU profile dump from the test?
Yes, the data accumulation is caused by unbalanced producing and consuming speeds. In this case we have a slow sorter (an I/O bottleneck), and the kv client is much faster than the sorter. Also, after changing the initial window size to 64 KB and the initial connection window size to 8 MB, the OOM disappeared.

```diff
-	grpcInitialWindowSize     = 1 << 26 // 64 MB The value for initial window size on a stream
-	grpcInitialConnWindowSize = 1 << 27 // 128 MB The value for initial window size on a connection
+	grpcInitialWindowSize     = 65535   // 64 KB The value for initial window size on a stream
+	grpcInitialConnWindowSize = 1 << 23 // 8 MB The value for initial window size on a connection
```
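To see why the old per-stream window can blow up memory: gRPC's HTTP/2 flow control lets each active stream buffer up to its receive window before the application drains it. With one stream per captured table, the worst-case buffering scales as window size times stream count. A deliberately simplified back-of-envelope in Go (it ignores the connection-level window, which also bounds buffering per connection; the 10k stream count comes from the 10k-table test below, the rest is assumption):

```go
package main

import "fmt"

func main() {
	const (
		oldStreamWindow = 1 << 26 // old per-stream window: 64 MB
		newStreamWindow = 65535   // new per-stream window: 64 KB
		numStreams      = 10000   // hypothetical: one gRPC stream per captured table
	)
	// Worst case: every stream's receive window fills up before the slow
	// sorter drains it, so buffered bytes scale as window * streams.
	fmt.Printf("old worst case: %d GB\n", oldStreamWindow*numStreams/(1<<30))
	fmt.Printf("new worst case: %d MB\n", newStreamWindow*numStreams/(1<<20))
}
```

Under these assumptions the worst case drops from hundreds of GB (far beyond any node's RAM) to hundreds of MB, which matches the observation that the OOM disappeared after the change.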
Is there a performance issue if we lower those values?
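One way to reason about the question: a stream can only have one flow-control window of unacknowledged data in flight, so single-stream throughput is bounded by roughly window / RTT, and there is no penalty as long as the window covers the bandwidth-delay product of the link. A rough sketch in Go, where the 1 Gbit/s bandwidth and 1 ms RTT are assumed values for illustration, not figures from this issue:

```go
package main

import "fmt"

func main() {
	const (
		bandwidthBytesPerSec = 125_000_000 // assumed: 1 Gbit/s link
		rttSec               = 0.001       // assumed: 1 ms intra-DC round trip
		streamWindow         = 65535       // new 64 KB per-stream window
	)
	// Bandwidth-delay product: bytes that must be in flight to fill the pipe.
	bdp := bandwidthBytesPerSec * rttSec
	// A stream's throughput ceiling with a fixed window is window / RTT.
	perStreamCap := streamWindow / rttSec
	fmt.Printf("BDP: %.0f KB\n", bdp/1000)
	fmt.Printf("per-stream throughput cap: %.0f MB/s\n", perStreamCap/1e6)
}
```

Under these assumptions a single stream is capped at tens of MB/s, which is usually ample for one table's change feed, and the larger 8 MB connection window still lets many streams share the link at full aggregate rate.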
Bug Report

Please answer these questions before submitting your issue. Thanks!

1. What did you do?

Capture 10k tables with ~90G incremental data using a single CDC node.

heap profile: profile.pb.gz

2. What did you expect to see?

No OOM.

3. What did you see instead?

The gRPC client consumes too much memory and causes OOM.

4. What version of TiCDC are you using?

v5.1.0