
Compactor: Clean up file is not block visit marker log when using S3 bucket #5790

Open

yeya24 opened this issue Feb 27, 2024 · 4 comments
yeya24 commented Feb 27, 2024

Describe the bug
When using the S3 bucket client, the Compactor emits excessive log lines saying "file is not block visit marker" during the Iter operation.

err="file is not block visit marker" operation="Iter

The error is thrown at https://github.com/cortexproject/cortex/blob/master/pkg/compactor/blocks_cleaner.go#L466. When cleaning up partial blocks, the compactor returns this error as soon as it finds a file that is not a visit marker, in order to terminate iteration early.

However, even though the error is expected, it is still returned from the bucket client. With the S3 client, the operation is retried and the error is eventually logged at https://github.com/cortexproject/cortex/blob/master/pkg/storage/bucket/s3/bucket_client.go#L135.

To Reproduce
Steps to reproduce the behavior:

  1. Start Cortex 1.16.0 Compactor

Expected behavior
There shouldn't be any log lines for "file is not block visit marker", as it signals expected early termination rather than an actual error.

yeya24 commented Feb 27, 2024

I opened thanos-io/objstore#103 to fix this issue.

@kryachkov

Hi @yeya24! Can this issue be affecting compactor performance? I get this log message a lot, and it feels like the compactor is not doing its job, as I often get "bucket index is too old" errors.


yeya24 commented Jun 24, 2024

Hi @kryachkov, for this error specifically the answer is no. It is just noisy, and it increments the failure counter even though the error is expected.

Regarding the issue you encountered, I think it is mainly related to your configuration. Try increasing your bucket index max stale time and block upload time. It could also be related to your object storage performance if you have a large number of objects in your bucket. But I think the issue was fixed in the latest 1.17.0 release.

@friedrichg (Member)

Still happening in v1.18.1

ts=2024-12-10T10:09:31.527783067Z caller=bucket_client.go:141 level=error msg="bucket operation fail after retries" err="file is not block visit marker" operation="Iter tenant-1/01JEMEWYHEEY2KEGSJ0Z1H5NHE"
