
preallocate message slice in consumer.go and random fixes #1298

Merged · 3 commits · Mar 5, 2019

Conversation

varun06 (Contributor) commented Mar 1, 2019

  • Preallocate the message slice, since that is more efficient (a minimal sketch of the idea follows this list).
  • Apparently in Go, placing pointer fields at the front of a struct reduces pressure on the GC, so I have made that change in the consumer file; if everything works fine, I will do the same for the other types too (see the layout sketch after the quoted explanation below).
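
A minimal sketch of the preallocation idea, not the actual diff in consumer.go; the `Message` type and the `collect` helper here are illustrative placeholders:

```go
package main

import "fmt"

// Message stands in for a fetched record; it is not Sarama's real type.
type Message struct {
	Key, Value []byte
	Offset     int64
}

// collect converts a fetched block into a slice of pointers. Because the
// number of messages is known up front, the result slice is allocated with
// that capacity once, instead of letting append grow and copy it repeatedly.
func collect(msgSet []Message) []*Message {
	out := make([]*Message, 0, len(msgSet))
	for i := range msgSet {
		out = append(out, &msgSet[i])
	}
	return out
}

func main() {
	fmt.Println(len(collect(make([]Message, 3)))) // prints 3
}
```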

From the performance Slack channel:

So the reason that works, as I understand it, is that for any given block of data there is a series of bits describing its "pointerness". Specifically, two bits per 64-bit word: one indicates whether this word is a pointer, and one indicates whether any word after it is a pointer.
So the GC scans through from the beginning of the object until there are no more pointers, and checks all the words marked as pointers.
If you have pointers at the end of an object, the GC has to scan through the entire object looking for them. If they are all at the beginning, it stops after the last pointer.

The GC might change in the future, but this is a harmless change, and if it gives us some better numbers 🤷‍♂️
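
A rough illustration of the field ordering the quote describes; the field names here are made up and do not mirror Sarama's actual consumer types:

```go
package main

// record groups its pointer-shaped fields (slices, maps, pointers) at the
// front. The GC only has to scan an object up to its last pointer word, so
// with this layout it can stop before the trailing scalar fields; if the
// pointers sat at the end, the whole object would have to be scanned.
type record struct {
	// pointer-shaped fields first
	key     []byte
	value   []byte
	headers map[string][]byte

	// plain scalars after the last pointer
	partition int32
	offset    int64
	timestamp int64
}

func main() {
	_ = record{} // layout-only example; nothing to run
}
```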

varun06 changed the title from "preallocate message slice and random fixes" to "preallocate message slice in consumer.go and random fixes" on Mar 1, 2019
bai (Contributor) commented Mar 1, 2019

Oops, looks like it needs a small rebase after #1297.

bai (Contributor) commented Mar 5, 2019

Neat, thanks! 💯

bai merged commit 61b76ee into IBM:master on Mar 5, 2019