store: add streamed postings reading #6340

Closed

Conversation

GiedriusS (Member)

  • I added CHANGELOG entry for this change.
  • Change is not relevant to the end user.

Changes

`readIndexRange` dominates the profiles here, so let's stream postings reads into `index.Postings` instead of allocating everything at once.

Work in progress.

Verification

Existing + ad-hoc tests.
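
As a rough illustration of the streaming idea described above (a minimal sketch with hypothetical names, not the PR's actual implementation), a postings iterator can decode series references lazily from an `io.Reader` instead of materializing the whole byte slice first:

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// streamedPostings decodes big-endian uint32 series references one at a time
// from the underlying reader, so memory stays constant regardless of how long
// the postings list is. Hypothetical type; the real PR code differs.
type streamedPostings struct {
	r         io.Reader
	remaining int
	cur       uint32
	err       error
	buf       [4]byte
}

func newStreamedPostings(r io.Reader, count int) *streamedPostings {
	return &streamedPostings{r: bufio.NewReader(r), remaining: count}
}

func (p *streamedPostings) Next() bool {
	if p.err != nil || p.remaining == 0 {
		return false
	}
	if _, err := io.ReadFull(p.r, p.buf[:]); err != nil {
		p.err = err
		return false
	}
	p.cur = binary.BigEndian.Uint32(p.buf[:])
	p.remaining--
	return true
}

func (p *streamedPostings) At() uint32 { return p.cur }
func (p *streamedPostings) Err() error { return p.err }

func main() {
	// Three fake series references encoded the way the index stores them.
	raw := []byte{0, 0, 0, 7, 0, 0, 0, 9, 0, 0, 0, 42}
	it := newStreamedPostings(bytes.NewReader(raw), 3)
	for it.Next() {
		fmt.Println(it.At()) // 7, 9, 42
	}
	if err := it.Err(); err != nil {
		fmt.Println("read error:", err)
	}
}
```

The point is that memory is bounded by the 4-byte scratch buffer rather than by the length of the postings list, which is the property the profiles above point at.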

GiedriusS force-pushed the add_streamed_postings_reader branch 9 times, most recently from fb0895f to 1ee9043 on May 10, 2023 at 12:02
GiedriusS force-pushed the add_streamed_postings_reader branch from 1ee9043 to 2d10d0f on May 10, 2023 at 13:28
yeya24 marked this pull request as ready for review on June 1, 2023 at 22:23
yeya24 marked this pull request as draft on June 2, 2023 at 06:24
yeya24 (Contributor) commented Jun 2, 2023

Hi @GiedriusS, we are interested in this feature because we recently saw the same high heap usage issue in this part of the code.
Do we still plan to work on this, or do we want to approach the problem in a different way? We are happy to collaborate if you are willing.

@@ -2416,142 +2415,42 @@ func (r *bucketIndexReader) fetchPostings(ctx context.Context, keys []labels.Lab

	// Fetch from object storage concurrently and update stats and posting list.
	g.Go(func() error {
		begin := time.Now()
		for _, p := range ptrs[i:j] {
			ir, err := r.block.bkt.GetRange(ctx, r.block.indexFilename(), p.ptr.Start, p.ptr.End-p.ptr.Start)

Contributor:

That means we have to send multiple requests to objstore, while the current logic sends one request per part?

Contributor:

Are we able to still send one request per part and create the postings reader from the GetRange reader?
If this is not doable, I feel it might be better to just download the postings to disk.
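
(Illustration only, not code from this PR: a sketch of how a single GetRange per part could still feed per-pointer streaming readers, assuming pointers within a part are sorted and non-overlapping. `fetchPartStreaming`, `rangeGetter`, and `pointerRange` are made-up names.)

```go
// Package bucketsketch is a hypothetical sketch: one GetRange call covers the
// whole part [partStart, partEnd), and a limited sub-reader is carved out for
// each posting pointer inside that part.
package bucketsketch

import (
	"context"
	"io"
)

// rangeGetter is the minimal slice of the bucket API this sketch needs;
// objstore.BucketReader exposes GetRange with this shape.
type rangeGetter interface {
	GetRange(ctx context.Context, name string, off, length int64) (io.ReadCloser, error)
}

// pointerRange stands in for the per-key index ranges inside one part,
// assumed sorted by Start and non-overlapping.
type pointerRange struct{ Start, End int64 }

func fetchPartStreaming(ctx context.Context, bkt rangeGetter, name string,
	partStart, partEnd int64, ptrs []pointerRange, handle func(io.Reader) error) error {
	rc, err := bkt.GetRange(ctx, name, partStart, partEnd-partStart)
	if err != nil {
		return err
	}
	defer rc.Close()

	off := partStart
	for _, p := range ptrs {
		// Skip the gap between the previous pointer and this one.
		if gap := p.Start - off; gap > 0 {
			if _, err := io.CopyN(io.Discard, rc, gap); err != nil {
				return err
			}
		}
		// Give the caller a reader limited to exactly this pointer's bytes.
		lr := io.LimitReader(rc, p.End-p.Start)
		if err := handle(lr); err != nil {
			return err
		}
		// Drain anything the handler left unread so rc lines up with p.End.
		if _, err := io.Copy(io.Discard, lr); err != nil {
			return err
		}
		off = p.End
	}
	return nil
}
```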

Contributor:

Maybe more GetRange requests won't impact performance; hopefully we can gather some data points to understand the impact.

GiedriusS (Member, Author):

Maybe we could add this under a feature flag? If it is enabled, we would send multiple requests, which would mean higher costs when using a SaaS that charges per request, but we would get constant RAM usage.

	statsMtx: statsMtx,
}

postingsCount, err := getInt32(bktReader, r.readBuf[:0])

Contributor:

Should this be r.readBuf rather than r.readBuf[:0]?
I tested it: r.readBuf[:0] cannot read any data because the buffer length is 0.

}

func getInt32(r io.Reader, buf []byte) (uint32, error) {
	read, err := r.Read(buf[:0])

Contributor:

Same here: buf[:0] needs to be buf.
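
(For illustration, a hedged sketch of the fix these two comments suggest, not necessarily the final PR code: read into a real 4-byte window with io.ReadFull, since a Read into a zero-length slice always returns 0 bytes.)

```go
package store // sketch; the actual file layout may differ

import (
	"encoding/binary"
	"fmt"
	"io"
)

// getInt32 reads one big-endian uint32 from r. Reading into buf[:0] always
// returns 0 bytes because the destination slice has length zero, so use a
// real 4-byte window; io.ReadFull also guards against short reads.
func getInt32(r io.Reader, buf []byte) (uint32, error) {
	if _, err := io.ReadFull(r, buf[:4]); err != nil {
		return 0, fmt.Errorf("reading postings count: %w", err)
	}
	return binary.BigEndian.Uint32(buf[:4]), nil
}
```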

GiedriusS (Member, Author)

@yeya24 #6442 is a similar optimization, based on this code and our discussions on Slack, that is actually feasible. #6442 is unfinished, but WDYT about such an optimization?

yeya24 (Contributor) commented Jun 13, 2023

@GiedriusS IIUC #6442 will supersede this PR? Or will we do both?

fpetkovski (Contributor) commented Jun 13, 2023

What about the roaring bitmap approach? I feel like that one is superior since we can aggregate postings in a streaming manner. I don't know if making one or more requests per posting is sustainable at scale.

@yeya24 also had an idea to calculate the intersection by merging one (or a controlled number) of postings at a time, instead of maxing out the fanout.

yeya24 (Contributor) commented Jun 13, 2023

> also had an idea to calculate the intersection by merging one (or a controlled number) of postings at a time, instead of maxing out the fanout.

Yeah, I am thinking about the same. We know the lengths of the postings lists, so we can sort them and start from the two smallest ones (a sketch follows below). Need some benchmarks though.

> I don't know if making one or more requests per posting is sustainable at scale.

#6442 no longer seems to do this; it is still one request per part.
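
(Illustration of the smallest-first idea mentioned above, with hypothetical names and plain sorted slices standing in for postings iterators: sort the lists by length and fold them together starting from the two shortest, since the running intersection can only shrink.)

```go
package main

import (
	"fmt"
	"sort"
)

// intersectTwo intersects two ascending, duplicate-free postings lists.
func intersectTwo(a, b []uint32) []uint32 {
	out := make([]uint32, 0, len(a))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i] < b[j]:
			i++
		case a[i] > b[j]:
			j++
		default:
			out = append(out, a[i])
			i++
			j++
		}
	}
	return out
}

// intersectSmallestFirst sorts the lists by length and folds them together
// starting from the two shortest; the running result can only get smaller,
// so larger lists are intersected against an already small set.
func intersectSmallestFirst(lists [][]uint32) []uint32 {
	if len(lists) == 0 {
		return nil
	}
	sort.Slice(lists, func(i, j int) bool { return len(lists[i]) < len(lists[j]) })
	res := lists[0]
	for _, l := range lists[1:] {
		if len(res) == 0 {
			return nil
		}
		res = intersectTwo(res, l)
	}
	return res
}

func main() {
	lists := [][]uint32{
		{1, 2, 3, 5, 8, 13, 21, 34},
		{2, 3, 5, 7, 11, 13},
		{3, 5, 13},
	}
	fmt.Println(intersectSmallestFirst(lists)) // [3 5 13]
}
```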

GiedriusS closed this on Oct 1, 2024