Hi @yeya24! Can this issue affect compactor performance? I see this log message a lot, and it feels like the compactor is not doing its job, as I often get "bucket index is too old" errors.
Hi @kryachkov, for this error specifically the answer is no. It is just noisy, since it increments the failure counter even though the error is expected.
Regarding the issue you encountered, I think it is mainly specific to your setup. Try increasing your bucket index max stale time and block upload time. It could also be related to your object storage performance if you have a large number of objects in your bucket. But I think the issue was fixed in the latest 1.17.0 release.
Describe the bug
When using the S3 bucket client, the compactor emits excessive logs saying
file is not block visit marker
during the iter operation. The error is thrown at https://github.com/cortexproject/cortex/blob/master/pkg/compactor/blocks_cleaner.go#L466: when cleaning up partial blocks, the compactor returns this error as soon as it finds a file that is not a visit marker, in order to terminate the iteration early.
However, even though the error is expected, it is still returned from the bucket client. With the S3 client, the operation is then retried, and the error is eventually logged at https://github.com/cortexproject/cortex/blob/master/pkg/storage/bucket/s3/bucket_client.go#L135.
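For context, here is a minimal sketch of the pattern in play, with hypothetical names (`iter`, `errStopIteration`) standing in for the real bucket client API and error: the callback returns a sentinel error purely to stop scanning early, so the caller has to recognize that sentinel with `errors.Is` instead of treating it as a failure.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// errStopIteration is a hypothetical sentinel, analogous to the
// "file is not block visit marker" error: it exists only to terminate
// iteration early and does not indicate a real failure.
var errStopIteration = errors.New("file is not block visit marker")

// iter is a stand-in for a bucket client's Iter method: it calls f for
// each object name and stops, propagating the error, as soon as f fails.
func iter(objects []string, f func(name string) error) error {
	for _, name := range objects {
		if err := f(name); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	objects := []string{"visit-mark-1", "visit-mark-2", "chunks/000001"}

	err := iter(objects, func(name string) error {
		if !strings.HasPrefix(name, "visit-mark") {
			// Found a non-marker file, so stop scanning early.
			// This "error" is expected, not a real failure.
			return errStopIteration
		}
		return nil
	})

	// The caller must unwrap the sentinel so it is not retried,
	// counted as a failure, or logged.
	if err != nil && !errors.Is(err, errStopIteration) {
		fmt.Println("real error:", err)
	}
}
```

Any caller that retries or logs without such an `errors.Is` check treats the expected early exit as a genuine failure, which is exactly what happens in the S3 client path above.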
To Reproduce
Steps to reproduce the behavior:
Run the compactor against an S3 bucket and let it clean up partial blocks; the log line above appears during the iter operation.
Expected behavior
There shouldn't be any logs for
file is not block visit marker
as it is not an error.
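One way to get there, sketched below with a hypothetical retryingDo wrapper (the actual retry logic in the S3 bucket client may be structured differently): check for the sentinel before retrying or logging, and propagate it without noise.

```go
package main

import (
	"errors"
	"log"
)

// Same hypothetical sentinel as in the earlier sketch.
var errStopIteration = errors.New("file is not block visit marker")

// retryingDo is a hypothetical retry wrapper. It propagates the expected
// sentinel immediately, so the sentinel is neither retried nor logged;
// only genuine failures reach the log line.
func retryingDo(maxRetries int, f func() error) error {
	var err error
	for i := 0; i < maxRetries; i++ {
		err = f()
		if err == nil || errors.Is(err, errStopIteration) {
			return err
		}
	}
	log.Printf("operation failed after %d attempts: %v", maxRetries, err)
	return err
}

func main() {
	attempts := 0
	err := retryingDo(3, func() error {
		attempts++
		return errStopIteration
	})
	// The sentinel short-circuits on the first attempt and is not logged.
	log.Printf("attempts=%d err=%v", attempts, err)
}
```

With this check in place, the expected sentinel returns on the first attempt and never reaches the error log, while genuine failures are still retried and reported.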