[PROF-8967] Reduce memory footprint and allocations for profiling timeline data #293
How much do we save by compressing vs just using an array of values, and indices into there?
We don't actually need to index into the array (which is why compressing it works).
On the "how much do we save", here's some back-of-the-napkin numbers:
Size of each observation: 4 bytes stacktrace, 4 bytes labels, 8 bytes timestamp, 8 bytes * N profile types
100 threads * 100 samples per second * 60 seconds * (4 + 4 + 8 + 8 * 4 profile types enabled for Ruby by default) = 28 800 000 bytes
For... reasons... the test app ends up recording data for 103 threads, so I get:
Which seems like a nice improvement as well. This data is highly compressible (lots of small numbers, zeros, numbers next to each other, ...), so we could make a smarter uncompressed representation, but I think the compressor takes care of that for us very well.
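To make the arithmetic concrete, here's a minimal sketch of that layout -- the field names and types are illustrative, not the actual definitions from this PR:

```rust
// Hypothetical per-observation layout matching the estimate above.
#[repr(C)]
struct TrimmedTimestampedObservation {
    stacktrace_id: u32, // 4 bytes
    labels_id: u32,     // 4 bytes
    timestamp: i64,     // 8 bytes
    values: [i64; 4],   // 8 bytes * 4 profile types enabled by default for Ruby
}

fn main() {
    let per_observation = std::mem::size_of::<TrimmedTimestampedObservation>();
    assert_eq!(per_observation, 48); // 4 + 4 + 8 + 8 * 4
    // 100 threads * 100 samples per second * 60 seconds = 600 000 observations
    let per_minute = 100 * 100 * 60 * per_observation;
    assert_eq!(per_minute, 28_800_000); // ~28.8 MB of raw data per minute
}
```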
Have you tried compressing in a Struct of Arrays format, and compressing each array of fields of the `TrimmedTimestampedObservation` struct independently? I'd think that this would give you a better compression ratio, since the data would be more homogeneous.
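As a rough sketch, the Struct-of-Arrays layout being suggested would look something like this (field names hypothetical):

```rust
// Array-of-Structs: one interleaved byte stream, as in the current approach.
struct ObservationsAos {
    buffer: Vec<u8>, // stacktrace | labels | timestamp | values | stacktrace | ...
}

// Struct-of-Arrays: one homogeneous stream per field. Each vec could then be
// compressed independently, so similar values sit next to each other.
struct ObservationsSoa {
    stacktrace_ids: Vec<u32>,
    label_ids: Vec<u32>,
    timestamps: Vec<i64>,
    values: Vec<i64>, // N entries per observation, one per profile type
}
```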
Also, you might get a better compression ratio using the Linked block mode, since you're doing a lot of small writes and thus the block size will default to the minimum (64KB).
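Assuming the encoder in question is `lz4_flex`'s frame encoder (the discussion below mentions `FrameEncoder`), switching to linked blocks would look roughly like this -- treat the exact builder calls as an assumption about that crate's API rather than the PR's actual code:

```rust
use lz4_flex::frame::{BlockMode, FrameEncoder, FrameInfo};
use std::io::Write;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Linked mode lets each block reference the previous ones, which can help
    // the ratio when the payload is spread across many small blocks.
    let info = FrameInfo::new().block_mode(BlockMode::Linked);
    let mut encoder = FrameEncoder::with_frame_info(info, Vec::new());
    // Many small writes, as on the profiling sample write path.
    encoder.write_all(&42u64.to_ne_bytes())?;
    let compressed: Vec<u8> = encoder.finish()?;
    println!("compressed to {} bytes", compressed.len());
    Ok(())
}
```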
I think those are very valid options; having said that, I'm not convinced they are what we would want here.

If I'm understanding your suggestion, doing this would mean having multiple `FrameEncoder`s, with multiple underlying buffers:

- This would mean more allocations, especially when growing the backing vecs -- there's no longer one single vec that gets doubled, but multiple small ones. 🤔
- I suspect (without having tried it, to be fair xD) this would cause more heap fragmentation.

On the Linked block mode: I'm assuming that if the encoder is referencing previous blocks, it means it's doing more work (rather than e.g. looking only at the current block). Depending on the win -- if it's a few %, I'm not sure it's worth doing more work on the profiling sample write path vs the memory savings, since it's not a lot of data anyway.
Probably better to return an error rather than panic.
Of course! I'm guessing we should probably turn this into an `Observations::new(...)` or something like that? (This was just another ugly hack as I wanted to focus on the prototype and didn't want to take the detour.)
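A minimal sketch of what that constructor-with-validation shape could look like -- the signature and the use of `anyhow` are assumptions for illustration, not the PR's actual code:

```rust
// Hypothetical constructor that validates its inputs and returns an error,
// instead of panicking later on the sample write path.
pub struct Observations {
    buffer: Vec<u8>,
}

impl Observations {
    pub fn new(expected_capacity: usize) -> anyhow::Result<Self> {
        anyhow::ensure!(expected_capacity > 0, "capacity must be non-zero");
        Ok(Self {
            buffer: Vec::with_capacity(expected_capacity),
        })
    }
}
```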
Cleaned all this up in the latest version!
ergonomics: you could put the data into a `repr(packed)` struct, convert that to bytes, and then read that in and out. My bias is that this would be a touch cleaner, but YMMV.
That sounds good -- Rust was just fighting me so much that I ended up with the simplest thing I could get going :)
I ended up not doing this -- I couldn't find a nice way of converting the struct to bytes that didn't involve pulling in a serializer or basically writing the same field-by-field code, just elsewhere in the codebase.
I wasn't convinced it was worth the extra indirection; suggestions welcome.
https://docs.rs/bytemuck/1.14.1/bytemuck/fn.bytes_of.html does it. Your call if this is more or less elegant than doing it field by field
and then you'd convert back with https://docs.rs/bytemuck/1.14.1/bytemuck/fn.try_from_bytes.html
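For reference, here's a small sketch of the round-trip those two functions enable, using bytemuck's derive feature; the struct and field names are illustrative:

```rust
use bytemuck::{Pod, Zeroable};

// Pod requires a fixed layout with no padding bytes; this #[repr(C)] ordering
// (8-byte field first, then two 4-byte fields) has none.
#[derive(Clone, Copy, Debug, PartialEq, Pod, Zeroable)]
#[repr(C)]
struct TrimmedTimestampedObservation {
    timestamp: i64,
    stacktrace_id: u32,
    labels_id: u32,
}

fn main() {
    let obs = TrimmedTimestampedObservation { timestamp: 1234, stacktrace_id: 1, labels_id: 2 };
    // Convert to bytes (e.g. to append to the observations buffer)...
    let bytes: &[u8] = bytemuck::bytes_of(&obs);
    // ...and convert back when reading the buffer out again.
    let back: &TrimmedTimestampedObservation =
        bytemuck::try_from_bytes(bytes).expect("wrong length or alignment");
    assert_eq!(*back, obs);
}
```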
I took a stab at `bytemuck` for 5-10 mins, but I wasn't quite understanding how to integrate it, and the documentation is not great (and I'm being nice...), so I'll leave it as-is for now.
You could declare this outside the loop and do a `.clear()` at the top of the loop, so you only ever allocate one vec.
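As a standalone sketch of that pattern (the sample processing here is a stand-in for the PR's actual loop body):

```rust
fn process_samples(samples: &[&[i64]]) {
    // Allocate once, outside the loop...
    let mut scratch: Vec<i64> = Vec::new();
    for sample in samples {
        // ...and clear at the top of each iteration. clear() keeps the backing
        // capacity, so after the first iterations no further allocation happens.
        scratch.clear();
        scratch.extend_from_slice(sample);
        // ... use `scratch` to build this sample's serialized observation ...
    }
}
```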
Good point. I think that ideally this would all move inside the iterator, so we don't need to have two copies of the code for going through each sample.
I tried doing that but found it too hard with my current Rust-fu level >_>
I've moved this to be inside the `TimestampedObservationsIter`. A `Vec` still gets created every time, but with a limited lifetime (just each iteration). Suggestions welcome on improvements :)
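For context, a hypothetical shape of that iterator -- the real one in the PR decodes more fields, but the per-item `Vec` with a lifetime limited to one iteration looks like this:

```rust
// Assumes a well-formed buffer whose length is a multiple of
// 8 * values_per_observation bytes.
struct TimestampedObservationsIter<'a> {
    remaining: &'a [u8],
    values_per_observation: usize,
}

impl<'a> Iterator for TimestampedObservationsIter<'a> {
    type Item = Vec<i64>;

    fn next(&mut self) -> Option<Self::Item> {
        if self.remaining.is_empty() {
            return None;
        }
        // A fresh Vec per observation; it's dropped as soon as the caller is
        // done with this item, so its lifetime is just one iteration.
        let mut values = Vec::with_capacity(self.values_per_observation);
        for _ in 0..self.values_per_observation {
            let (head, tail) = self.remaining.split_at(8);
            values.push(i64::from_ne_bytes(head.try_into().unwrap()));
            self.remaining = tail;
        }
        Some(values)
    }
}
```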