pd not applying placement to partitions of new table #4467
Comments
After some research, I found that the root cause may be the isolation_level setting in the generated rules: PD meets the isolation level requirement by scheduling peers onto stores with different "region" labels. So while the syntax is completely legal and possible, PD just won't schedule 3 replicas onto stores with the same region label (because of isolation, that is seen as a "bad schedule"). @rleungx Could you confirm the behavior? If that is true, then it is expected for PD, and this is more of an issue with TiDB's behavior.
Yes, once you set the isolation level, PD will try to make the distribution of regions meet the requirement of isolation.
So the isolation level takes a higher priority than the requirements of constraints. @morgo What do you think about that? We could give up setting the isolation level from TiDB; then there would be no such problem. While it may not satisfy the requirements of isolation, PD should score labels by …
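For illustration, here is a hypothetical sketch of the kind of placement rule TiDB generates for a partition, where label_constraints pins all voters to one region while isolation_level simultaneously asks PD to spread peers across distinct "region" values. The group ID, rule ID, and region value below are placeholders, not output from this cluster:

```json
{
  "group_id": "tidb_generated_group",
  "id": "partition_p0_voters",
  "role": "voter",
  "count": 3,
  "label_constraints": [
    { "key": "region", "op": "in", "values": ["us-east-1"] }
  ],
  "location_labels": ["region", "zone", "host"],
  "isolation_level": "region"
}
```

With isolation_level set to "region", PD treats three peers in the same region as a scheduling violation, even though the label constraint can only be satisfied by placing all three peers in that one region.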
@xhebox Sounds good. But let's break this into two requests:
I can write docs for (1). I think that it will be a common misconfiguration.
I manually edited the placement rules using pd-ctl to remove "isolation_level" from the rules generated by TiDB, and PD quickly applied the expected scheduling:
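A minimal sketch of that pd-ctl workflow, assuming a PD endpoint at 127.0.0.1:2379 (the endpoint and the file name are placeholders):

```sh
# Dump the placement rules currently stored in PD to a local file.
pd-ctl -u http://127.0.0.1:2379 config placement-rules load --out="rules.json"

# Edit rules.json by hand, deleting the "isolation_level" field from the
# affected rules, then write the modified rules back to PD.
pd-ctl -u http://127.0.0.1:2379 config placement-rules save --in="rules.json"

# Verify the rules PD is now using.
pd-ctl -u http://127.0.0.1:2379 config placement-rules show
```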
@xhebox I guess that this can be closed by pingcap/tidb#30859, right?
Yes, I've created pingcap/tidb#30960 to track another proposal from morgo. This issue can be closed now.
Bug Report
What did you do?
I deployed a cluster of 9 TiKV stores in 3 geographical regions and gave each store a "region" label to match the cloud region it's deployed to.
I created 3 placement policies, one for each of the regions where TiKV stores are deployed.
I created a table with 3 partitions, each of which uses a different placement policy.
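For reference, a minimal SQL sketch of this setup; the policy names, region label values, and partition boundaries are assumptions, and the original report may have used the PRIMARY_REGION/REGIONS syntax rather than CONSTRAINTS:

```sql
-- Hypothetical policies, one per cloud region (region label values are assumptions).
CREATE PLACEMENT POLICY p_us_east CONSTRAINTS="[+region=us-east-1]";
CREATE PLACEMENT POLICY p_us_west CONSTRAINTS="[+region=us-west-2]";
CREATE PLACEMENT POLICY p_eu_west CONSTRAINTS="[+region=eu-west-1]";

-- Hypothetical table: each partition is pinned to a different policy.
CREATE TABLE t1 (
  id INT NOT NULL
) PARTITION BY RANGE (id) (
  PARTITION p0 VALUES LESS THAN (1000)     PLACEMENT POLICY=p_us_east,
  PARTITION p1 VALUES LESS THAN (2000)     PLACEMENT POLICY=p_us_west,
  PARTITION p2 VALUES LESS THAN (MAXVALUE) PLACEMENT POLICY=p_eu_west
);
```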
What did you expect to see?
After creating the new table, I expected all replicas for each of the partitions to be placed in the region designated in the associated placement policy.
What did you see instead?
PD does not seem to follow the placement policy.
pd.log
What version of PD are you using (pd-server -V)?