
setting -distributor.ingestion-tenant-shard-size to 0 throws error #5189

Closed
sanyamjain22 opened this issue Mar 1, 2023 · 2 comments · Fixed by #5759
Labels: good first issue, help wanted, type/bug

Comments

@sanyamjain22

Describe the bug
Setting -distributor.ingestion-tenant-shard-size to 0 throws an error. According to the documentation, setting this flag to 0 should disable shuffle sharding. However, it throws a validation error saying the value cannot be 0.

To Reproduce

  1. Set -distributor.ingestion-tenant-shard-size to 0
  2. Deploy distributors

Expected behavior
Either setting the flag to 0 should disable shuffle sharding as documented, or the documentation should be corrected to say that 0 is not a valid value.

Environment:

  • Infrastructure: [e.g., Kubernetes, bare-metal, laptop]
  • Deployment tool: [e.g., helm, jsonnet]

Additional Context

@friedrichg
Member

I am familiar with this. I saw it when I started using shuffle sharding for ingesters a long time ago. I ended up setting a value larger than 15 for all users, while configuring a bigger shard size for larger users via overrides.
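The workaround described above (a default shard size for everyone, larger shards for big tenants via overrides) can be sketched with Cortex's runtime overrides file. The tenant IDs and values here are hypothetical; only the `-distributor.ingestion-tenant-shard-size` flag and the `ingestion_tenant_shard_size` override key come from the discussion:

```yaml
# Distributor started with a default shard size for all tenants, e.g.:
#   -distributor.ingestion-tenant-shard-size=15
#
# Runtime overrides file: bump the shard size for larger tenants.
overrides:
  large-tenant-a:          # hypothetical tenant ID
    ingestion_tenant_shard_size: 60
  large-tenant-b:          # hypothetical tenant ID
    ingestion_tenant_shard_size: 120
```

This avoids the invalid-0 case entirely, at the cost of having to maintain per-tenant overrides.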

Fixing the docs seems like the easiest thing to do here. But I wonder whether we should instead fix the limit to do what the docs say: "0 should make all tenants use no sharding by default".
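The semantics being proposed ("0 disables shuffle sharding, so the tenant uses all ingesters") can be illustrated with a minimal sketch. This is not Cortex's actual ring-token implementation; the function name and the hash-seeded sampling are illustrative assumptions, chosen only to show the 0-means-everything behavior:

```python
import hashlib
import random


def ingesters_for_tenant(tenant_id, ingesters, shard_size):
    """Pick the subset of ingesters a tenant may write to.

    shard_size == 0 disables shuffle sharding: the tenant
    uses the full ring (all ingesters), with no validation error.
    """
    if shard_size <= 0:
        return list(ingesters)
    # Deterministic per-tenant shard: seed a PRNG with a stable
    # hash of the tenant ID, so the same tenant always gets the
    # same subset while different tenants tend to get different ones.
    seed = int(hashlib.sha256(tenant_id.encode()).hexdigest(), 16)
    rnd = random.Random(seed)
    return rnd.sample(sorted(ingesters), min(shard_size, len(ingesters)))
```

Under this reading, the validation would only reject negative values, and 0 would be a legal "no sharding" setting, matching what the docs say.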

@jeromeinsf added the good first issue label Mar 29, 2023
@dogukanteber
Contributor

I think that is the better option, @friedrichg. I would like to fix this problem if we agree on your proposal.

I reproduced the error on my local system, but before working on the issue I would like to know whether I should fix the docs or fix the code.
