etcd cluster has different backend datastore sizes. #12218
Comments
Can you query the
@jingyih we were able to fix this issue by removing the member that had the increased data store size and re-adding it to the cluster. Once we did that for the two members with the 5GB data store size, the cluster started to behave normally. We just want to understand what caused this issue. Before it started, we upgraded our cluster from 2.3.7 to 3.4.2 (rolling upgrade) and also migrated the datastore from etcd2 to etcd3; we started to see this issue after the upgrade. A rough sketch of the remove/re-add sequence is below. Things to note:
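For anyone hitting the same symptom, this is roughly the remove/re-add sequence described above, as a sketch only: the member ID, member name, and URLs are placeholders (not from this cluster), and it assumes etcdctl v3 with endpoints and TLS flags already configured.

```sh
# Compare backend sizes across all members; the DB SIZE column is what differed.
ETCDCTL_API=3 etcdctl endpoint status --cluster -w table

# Remove the member whose backend ballooned, by its hex member ID (placeholder).
ETCDCTL_API=3 etcdctl member remove 8e9e05c52164694d

# Re-add it under the same name and peer URL (placeholders).
ETCDCTL_API=3 etcdctl member add etcd-node3 --peer-urls=https://10.0.0.3:2380

# On the re-added host: clear its old data dir, then start etcd with
# --initial-cluster-state=existing so it syncs a fresh snapshot from the leader.
```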
@tangcong We upgraded our cluster manually in a rolling fashion. For datastore migration, we stopped the etcd service on all hosts and then migrated from v2 to v3. Based on the docs here, it looks like we have auth enabled automatically, as we have
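As a rough sketch of that offline migration step (the data dir path is a placeholder; `etcdctl migrate` is the v3 tool for rewriting the v2 keyspace into the v3 backend and was still shipped in 3.4):

```sh
# With etcd stopped on the host, rewrite the v2 store into the v3 mvcc backend.
ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd

# After restarting, sanity-check that users and roles survived the migration,
# since enabled auth changes how requests are authorized.
ETCDCTL_API=3 etcdctl user list
ETCDCTL_API=3 etcdctl role list
```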
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.
What happened:
One of our etcd clusters has a strange issue: checking datastore sizes with `endpoint status`, we see two nodes with a 381MB data store size while the other three nodes report around 5GB.
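For concreteness, the check that surfaced this looks roughly like the following (the endpoint addresses are placeholders for a five-node cluster; the DB SIZE column in the table is where the discrepancy shows up):

```sh
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379 \
  endpoint status -w table
```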
What you expected to happen:
All the members should have the same datastore size.
How to reproduce it (as minimally and precisely as possible):
We have observed this issue on only one particular cluster.
Anything else we need to know?:
Validations performed:
Environment:
- `kubectl version`: v1.10.8
- `cat /etc/os-release`: centos 7
- `uname -a`: 3.10.0