
feat: use sync.Pool to reduce alloc #729

Open · wants to merge 2 commits into base: main
Conversation

@ethan256 ethan256 commented Jun 6, 2024

Use sync.Pool to reduce allocations on conn.Receive.
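A minimal sketch of the pattern this PR proposes (names like `receive` and the 64 KiB buffer size are hypothetical, not taken from the gopcua codebase): a `sync.Pool` of `*[]byte` reused across receive calls instead of allocating a fresh slice each time. The pool stores a pointer rather than the slice header itself so that `Put` does not allocate when boxing the value into an interface.

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands out reusable receive buffers. New is only called
// when the pool is empty, so steady-state receives allocate nothing.
var bufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 64*1024) // hypothetical max chunk size
		return &b
	},
}

// receive models conn.Receive: grab a pooled buffer, reset its
// length, and fill it with the next chunk.
func receive() *[]byte {
	bp := bufPool.Get().(*[]byte)
	*bp = (*bp)[:0]
	*bp = append(*bp, "payload"...) // stand-in for reading from the wire
	return bp
}

// release returns the buffer to the pool. The caller must not keep
// any reference into the slice after calling this.
func release(bp *[]byte) {
	bufPool.Put(bp)
}

func main() {
	bp := receive()
	fmt.Println(string(*bp)) // prints "payload"
	release(bp)
}
```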

Signed-off-by: yuanliang <yuanliang_zh@163.com>
@kung-foo (Member)

Hi @ethan256, do you have some real-world motivation for wanting to introduce a pool?

@ethan256 (Author) commented Jun 19, 2024

> Hi @ethan256, do you have some real-world motivation for wanting to introduce a pool?

When I pressure-tested it, I found that the program had a high GC percentage; the pprof results are as follows:

[pprof screenshot: before the change]

Modified results:

[pprof screenshot: after the change]

Signed-off-by: yuanliang <yuanliang_zh@163.com>
@magiconair (Member)

@danomagnum is this something the server could benefit from as well?

@magiconair magiconair added this to the v0.6.1 milestone Dec 3, 2024
@danomagnum (Contributor)

I would think so. I'd have to check, but it may automatically benefit from this to some extent, since it already uses the uasc module.

@magiconair (Member)

Could you have a look, @danomagnum? Then I'll merge it into v0.6.1.

@kung-foo (Member) commented Dec 5, 2024

I'd like to see a bit more research into whether or not this is needed. sync.Pools have a lot of footguns. I checked one of our busiest clients (a subscription with about 3000 notifications per second), and I see "gccpu_fraction": 0.0014583341353648874. I know this is an N=1 argument, but for us, GC pressure is basically nil.

@magiconair (Member) commented Dec 5, 2024

I remember sync.Pool having issues. Is this something we could hide behind a feature flag? Would we want to?

@magiconair (Member)

Hey @kung-foo, my Keybase account got blocked when my laptop and phone were stolen. Can you send me an email to re-connect? Check your inbox; I sent something to your GitHub email account.

@danomagnum (Contributor) commented Dec 5, 2024

I took a look at it, and there are a couple of places where, as currently implemented, it would impact the server, mostly the readChunk function. I agree that more investigation is needed, though, to make sure we don't leak any slice references that would then get put back in the pool and re-used while not actually free. The other encoding performance pull request (#726) seems like a better first approach to me, and (without actually looking into it) I don't see why you couldn't eventually combine them: using a pool of streams instead of a pool of buffers.
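The aliasing hazard described above can be made concrete with a small sketch (names are illustrative, not from the codebase): if decoding hands a caller a sub-slice of the pooled buffer and the buffer is then `Put` back, the next `Get` can recycle the same backing array and overwrite bytes the caller still holds.

```go
package main

import (
	"fmt"
	"sync"
)

var pool = sync.Pool{
	New: func() any { b := make([]byte, 8); return &b },
}

func main() {
	bp := pool.Get().(*[]byte)
	copy(*bp, "chunk-01")
	view := (*bp)[:5] // caller keeps a reference into the pooled buffer
	pool.Put(bp)      // BUG: buffer recycled while view is still live

	bp2 := pool.Get().(*[]byte) // may hand back the same backing array
	copy(*bp2, "XXXXXXXX")

	// view may now read "XXXXX" instead of "chunk": the pool reused
	// memory that was never actually free.
	fmt.Println(string(view))
}
```

The usual fix is to copy decoded fields out of the buffer before returning it to the pool, which eats into the allocation savings and is why this needs measurement.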

@magiconair magiconair modified the milestones: v0.6.1, 0.7.0 Dec 9, 2024
@ethan256 (Author)

> …sure we don't leak any slice references that would then get put back in the pool and re-used while not actually free.

I agree with this; there are indeed some non-free slices being put back into the pool. Further testing and analysis are needed.
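One way to quantify the trade-off before deciding: compare allocations per operation with and without the pool. A sketch using the standard library's `testing.AllocsPerRun` (buffer size and structure are hypothetical, not the PR's actual code path):

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

// sink keeps the fresh allocation alive so the compiler cannot elide it.
var sink []byte

var bufPool = sync.Pool{
	New: func() any { b := make([]byte, 64*1024); return &b },
}

func main() {
	fresh := testing.AllocsPerRun(1000, func() {
		sink = make([]byte, 64*1024) // one heap allocation per receive
	})
	pooled := testing.AllocsPerRun(1000, func() {
		bp := bufPool.Get().(*[]byte) // reuses a previously allocated buffer
		bufPool.Put(bp)
	})
	fmt.Printf("fresh: %.0f allocs/op, pooled: %.0f allocs/op\n", fresh, pooled)
}
```

Running this under realistic chunk sizes and concurrency, alongside the gccpu_fraction numbers above, would show whether the pool's savings justify the aliasing risk.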
