Excessive bucket List operations when actively querying data over long time ranges #5018
Making a note here from a separate discussion: @sandeepsukhani had a good suggestion that it's possible to do a single list operation which returns all the objects, including "subdirectories". This would make it possible to have both the … I think ultimately both would be great, but making the actual operation really cheap, as Sandeep suggests, is ideal I think. Another consideration with using …
Fixed by #5018
@slim-bean this doesn't make sense! The issue fixes itself? Should this say …
grafana/loki#5018 has now been closed
I've been storing my shell history in Loki for almost a year now, and am discovering some pain points around List operations which are exacerbated by this use case.
Currently the compactor searches through every table in storage to look for work to do: one list operation to enumerate all the index tables, plus a list operation per table to see the files in it. So every 10 minutes (the default compactor run interval) there is a list call for every day of stored data.
Also, when you query data, boltdb-shipper downloads the index and caches it locally for some time; while a table is cached, every 5 minutes the querier will "sync" it to make sure no new files were uploaded to the object store. For loki-shell I set the TTL on this cache to > 300 days because I regularly query long-term data, so every 5 minutes every table in the cache makes a List call to the object store.
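To make the scale of the problem concrete, here is a back-of-envelope estimate of daily List calls under the behavior described above. The function name and the assumption of one index table per day (the usual 24h period config) are mine, and the numbers are illustrative, not measured:

```python
# Rough estimate of object-store List calls per day for boltdb-shipper,
# assuming one index table per day and the default intervals from the
# issue text (10 min compactor runs, 5 min querier syncs).

def list_calls_per_day(days_retained: int,
                       compactor_interval_min: int = 10,
                       sync_interval_min: int = 5) -> dict:
    runs = 24 * 60 // compactor_interval_min   # 144 compactor runs/day
    syncs = 24 * 60 // sync_interval_min       # 288 querier sync passes/day
    # Each compactor run: 1 List to enumerate tables + 1 List per table.
    compactor = runs * (1 + days_retained)
    # Each sync pass: 1 List per cached table.
    querier = syncs * days_retained
    return {"compactor": compactor, "querier": querier,
            "total": compactor + querier}

print(list_calls_per_day(300))
# → {'compactor': 43344, 'querier': 86400, 'total': 129744}
```

With ~300 days of cached tables, that is on the order of 130k List calls per day from a single compactor plus one querier's cache, before any actual query work.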
I think a good first step at improving this would be to not compact or 'sync' index tables older than `reject_old_samples_max_age`.
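The proposed filter could look something like the following sketch. This is a hypothetical illustration, not Loki's implementation: it relies on boltdb-shipper's convention of naming daily tables by days since the Unix epoch (e.g. `index_18993`), and the function name and threshold logic are my own:

```python
from datetime import datetime, timedelta, timezone

SECONDS_PER_DAY = 24 * 60 * 60

def table_is_active(table_name: str, max_age: timedelta, now=None) -> bool:
    """Return True if the table's time range ends within `max_age` of now.

    Tables older than the cutoff can no longer receive new writes
    (samples that old would be rejected), so compaction and cache
    syncs could safely skip them.
    """
    now = now or datetime.now(timezone.utc)
    day_number = int(table_name.split("_")[-1])       # days since epoch
    table_end = datetime.fromtimestamp((day_number + 1) * SECONDS_PER_DAY,
                                       tz=timezone.utc)
    return table_end >= now - max_age

# Example: with a 7-day reject_old_samples_max_age on 2022-01-01,
# only recent tables would still be compacted or re-synced.
now = datetime(2022, 1, 1, tzinfo=timezone.utc)
tables = ["index_18990", "index_18993", "index_18600"]
active = [t for t in tables if table_is_active(t, timedelta(days=7), now)]
print(active)
# → ['index_18990', 'index_18993']
```

The key design point is that `reject_old_samples_max_age` already defines the boundary past which no new chunks or index files can appear, so there is nothing for the compactor or the sync loop to discover in older tables.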