proto.Size takes up to 30% memory usage in excessively large range requests #12835
Comments
BTW: please verify what happens if you call the generated Size() method directly. The implementation seems not to require 'marshalling' to compute the size: etcd/api/etcdserverpb/etcdserver.pb.go, line 337 at 7168409.
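For illustration only, a minimal sketch of what calling the generated method directly could look like. The module path assumes the etcd v3.5 `api` layout, and the response contents are placeholders:

```go
// Sketch: the gogo/protobuf-generated Size() method on a RangeResponse sums
// encoded field lengths directly, per the comment above, without an
// intermediate marshal buffer.
package main

import (
	"fmt"

	pb "go.etcd.io/etcd/api/v3/etcdserverpb"
	"go.etcd.io/etcd/api/v3/mvccpb"
)

func main() {
	resp := &pb.RangeResponse{
		Kvs: []*mvccpb.KeyValue{
			{Key: []byte("/registry/pods/default/example"), Value: make([]byte, 1024)},
		},
		Count: 1,
	}
	// Generated method, no proto.Size wrapper involved.
	fmt.Println("encoded size:", resp.Size())
}
```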
Will come back to the thread after getting the results.
Could be related to #12842, which is about etcd memory usage in general.
Here is the before-and-after comparison. We only set up a single-node etcd cluster with 8 GiB RAM and 2 vCPU.
However, I was not turning off the ref:
But I think we are sufficiently confident that proto.Size() -> rangeResponse.Size() [and similar] is a good move to justify a PR.
* etcdserver/util.go: reduce memory when logging range requests. Fixes #12835
* Update CHANGELOG-3.5.md
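For context, here is a hedged sketch of the shape of that change, not the actual etcd diff; the helper name, threshold handling, and logger wiring are simplified assumptions:

```go
// Sketch of the idea behind the fix: report the response size in the warn log
// via the gogo-generated Size() method instead of proto.Size, so logging a
// slow range request does not force extra allocation for a huge response.
package serverutil

import (
	"time"

	"go.uber.org/zap"
)

// sizer is satisfied by gogo/protobuf-generated responses such as
// etcdserverpb.RangeResponse.
type sizer interface {
	Size() int
}

// warnOfExpensiveRequest is a simplified stand-in for etcd's warn-logging
// helper, shown only to illustrate where resp.Size() replaces proto.Size(resp).
func warnOfExpensiveRequest(lg *zap.Logger, start time.Time, reqStr string, resp sizer, threshold time.Duration) {
	took := time.Since(start)
	if took <= threshold {
		return
	}
	size := -1
	if resp != nil {
		size = resp.Size() // field-by-field size computation, no marshal buffer
	}
	lg.Warn("request took too long",
		zap.Duration("took", took),
		zap.String("request", reqStr),
		zap.Int("response-size", size),
	)
}
```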
It can be reproduced by listing 3000 Kubernetes pods across all namespaces with 50 concurrent requests, which takes up to 70% of 8 GiB RAM.
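As a side note, here is a hedged sketch of how heap data for such a run can be captured. It assumes etcd was started with --enable-pprof and that the address below is the reachable client URL; adjust for your setup:

```go
// Sketch: download a heap profile from an etcd server started with
// --enable-pprof, then inspect it with `go tool pprof heap.pb.gz`.
// The listen address is an assumption for a local single-node setup.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:2379/debug/pprof/heap")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("heap.pb.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
}
```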
The following is the heap_alloc profiling data. It looks like proto.Size takes up to 30% of memory usage in excessively large range requests. Personally, I think removing it may reduce some RAM cost and lower the possibility of etcd running out of memory.
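To quantify the difference, a hypothetical micro-benchmark (not from this issue; module paths assume etcd v3.5, and whether proto.Size allocates depends on the protobuf runtime actually linked) could compare the two size paths:

```go
// Hypothetical benchmark comparing proto.Size with the generated Size() on a
// large RangeResponse, roughly imitating a big list of pods. Run with
// `go test -bench . -benchmem` and compare the allocs/op columns.
package sizebench

import (
	"testing"

	"github.com/golang/protobuf/proto"
	pb "go.etcd.io/etcd/api/v3/etcdserverpb"
	"go.etcd.io/etcd/api/v3/mvccpb"
)

// newLargeRangeResponse builds a response with n key-value pairs carrying
// 4 KiB values each; the key layout is only illustrative.
func newLargeRangeResponse(n int) *pb.RangeResponse {
	kvs := make([]*mvccpb.KeyValue, n)
	for i := range kvs {
		kvs[i] = &mvccpb.KeyValue{
			Key:   []byte("/registry/pods/default/example"),
			Value: make([]byte, 4096),
		}
	}
	return &pb.RangeResponse{Kvs: kvs, Count: int64(n)}
}

func BenchmarkProtoSize(b *testing.B) {
	resp := newLargeRangeResponse(3000)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = proto.Size(resp)
	}
}

func BenchmarkGeneratedSize(b *testing.B) {
	resp := newLargeRangeResponse(3000)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = resp.Size()
	}
}
```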
After making a custom patch to remove the proto.Size call in the warn logging, we do see an overall 7% drop in memory usage. The reason it is not the theoretical 30% is that Unmarshal in the MVCC layer claims more memory than last time. Can we get some insights from gRPC/etcd experts to explain this behavior and suggest the next step? @gyuho