
etcd cluster has different backend datastore sizes. #12218

Closed
sridhav opened this issue Aug 14, 2020 · 5 comments

sridhav commented Aug 14, 2020

What happened:
One of our etcd clusters has a strange issue: when we check the datastore sizes (using endpoint status), 2 nodes report a data store size of about 381MB while the other 3 nodes report around 5GB.
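
For reference, the sizes above come from output along the lines of the following command (the endpoint addresses and certificate paths are placeholders, not our real values):

    ETCDCTL_API=3 etcdctl endpoint status --write-out=table \
      --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379,https://etcd-4:2379,https://etcd-5:2379 \
      --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/client.crt --key=/etc/etcd/client.key
    # The DB SIZE column shows ~381MB on 2 members and ~5GB on the other 3.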

What you expected to happen:
All the members should have roughly the same datastore size.

How to reproduce it (as minimally and precisely as possible):
We have observed this issue on only one particular cluster.

Anything else we need to know?:
Validations performed:

  1. We checked that all the etcd members report that they belong to the same cluster ID.
  2. We ran the compaction and defragmentation commands multiple times (roughly as sketched below), but the DB size only went down on the 2 hosts mentioned above; the other 3 nodes continued to show a 5GB DB size and the alarm persisted.
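
The compaction and defragmentation in step 2 were run roughly as follows; the revision lookup and endpoint list are illustrative, not an exact transcript:

    # take the current revision from one member and compact up to it
    rev=$(ETCDCTL_API=3 etcdctl endpoint status --write-out=json | grep -oE '"revision":[0-9]+' | head -1 | cut -d: -f2)
    ETCDCTL_API=3 etcdctl compaction "$rev"

    # defragment each member individually, then check whether the alarm cleared
    ETCDCTL_API=3 etcdctl defrag --endpoints=https://etcd-1:2379
    ETCDCTL_API=3 etcdctl alarm list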

Environment:

  • Kubernetes version (use kubectl version): v1.10.8
  • etcd version: v3.4.2
  • Cloud provider or hardware configuration: Bare Metal
  • OS (e.g: cat /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:
jingyih (Contributor) commented Aug 16, 2020

Can you query the /metrics endpoint of each etcd server and look for etcd_mvcc_db_total_size_in_use_in_bytes? Do the values roughly match?
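
Something along these lines should show it for each member (the endpoint URL and certificate paths are placeholders for your TLS setup):

    curl -s --cacert /etc/etcd/ca.crt --cert /etc/etcd/client.crt --key /etc/etcd/client.key \
      https://etcd-1:2379/metrics | grep etcd_mvcc_db_total_size
    # etcd_mvcc_db_total_size_in_bytes        -> physical size of the backend DB file
    # etcd_mvcc_db_total_size_in_use_in_bytes -> logically in-use size; this is the one to compare across members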

sridhav (Author) commented Aug 18, 2020

@jingyih we were able to fix this issue by removing the member with the increased data store size and re-adding it to the cluster. Once we did that for two members with the 5GB data store size, the cluster started to behave normally.
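
Roughly, the remove/re-add for each affected member looked like the following; the member ID, name, and URLs are placeholders, and this follows the standard etcd runtime-reconfiguration steps rather than an exact transcript of what we ran:

    # from a healthy member, find the ID of the oversized member and remove it
    ETCDCTL_API=3 etcdctl member list --write-out=table
    ETCDCTL_API=3 etcdctl member remove 8211f1d0f64f3269    # placeholder member ID

    # stop etcd on that node and wipe its old data directory, then re-add the member
    ETCDCTL_API=3 etcdctl member add etcd-4 --peer-urls=https://etcd-4:2380
    # finally, start etcd on that node again with --initial-cluster-state=existing so it syncs a fresh snapshot from the leader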

We just want to understand what caused this issue. Before it started, we had upgraded our cluster from 2.3.7 to 3.4.2 (rolling upgrade) and also migrated the datastore from etcd2 to etcd3. After the upgrade we started to see this issue.

Things to note:

  • We followed the same upgrade plan on a few other clusters (staging, testing, and a few prod) and haven't seen this issue there; it happened only on this one cluster. We are just trying to understand the reason behind it.
  • We tried defragmenting the cluster and it failed; see etcd defragmentation doesnt work as expected #12219.

tangcong (Contributor) commented:

How did you upgrade your cluster? Do you have auth enabled? Has your cluster been consistent all the time? See issues #11689 and #11651.

sridhav (Author) commented Aug 24, 2020

@tangcong We upgraded our cluster manually in a rolling fashion. For the datastore migration, we stopped the etcd service on all hosts and then migrated from v2 to v3.
We have mTLS enabled for security, but we don't have any RBAC enabled in etcd.

Based on the docs here, it looks like we have auth enabled automatically because we run with --client-cert-auth=true:
https://etcd.io/docs/v3.3.12/op-guide/authentication/
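
For reference, the TLS-related server flags on our members look roughly like this (only the TLS flags are shown, and the certificate paths are placeholders); no etcdctl auth enable / RBAC roles are configured on top of this:

    etcd \
      --client-cert-auth=true \
      --trusted-ca-file=/etc/etcd/ca.crt \
      --cert-file=/etc/etcd/server.crt --key-file=/etc/etcd/server.key \
      --peer-client-cert-auth=true \
      --peer-trusted-ca-file=/etc/etcd/ca.crt \
      --peer-cert-file=/etc/etcd/peer.crt --peer-key-file=/etc/etcd/peer.key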

stale bot commented Nov 23, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 21 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label on Nov 23, 2020
stale bot closed this as completed on Dec 14, 2020