Potential Issue with Newer Instance Types and etcd #1230
Seems like the same for me too, with
I vaguely remember someone telling me that device names are different on newer instance types, since their EBS volumes are exposed as NVMe block devices:
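For illustration (sample output, not from the original report; device names and volume IDs are made up), the same attached EBS volume looks roughly like this on the two families of instance types:

```
# Xen-based instance type (e.g. t2.*): the volume keeps the name used at attach time.
$ lsblk -d -o NAME,MODEL,SERIAL
NAME MODEL SERIAL
xvda
xvdf

# Nitro-based instance type (e.g. m5.*): the NVMe driver exposes the volume,
# so the attach-time name (/dev/xvdf) never appears. The EBS volume ID shows
# up in the device serial instead.
$ lsblk -d -o NAME,MODEL,SERIAL
NAME    MODEL                      SERIAL
nvme0n1 Amazon Elastic Block Store vol0123456789abcdef0
nvme1n1 Amazon Elastic Block Store vol0fedcba9876543210
```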
Found it! 😉 #1048 Would you mind implementing the workaround shared in the upstream CoreOS issue into kube-aws? You can also add the workaround into
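Not the verbatim workaround from the upstream issue, but a minimal sketch of the idea, assuming the etcd data volume is attached as /dev/xvdf and the node has no NVMe instance-store disks: before the mount unit runs, find the non-root NVMe EBS disk and symlink it to the device name the generated mount unit expects.

```
#!/bin/bash
# Sketch only: create a compatibility symlink so a mount unit that references
# /dev/xvdf keeps working on instance types that expose EBS volumes via NVMe.
set -euo pipefail
shopt -s nullglob

WANTED=/dev/xvdf   # device name the generated var-lib-etcd2.mount expects (assumption)

if [ ! -e "$WANTED" ]; then
  for dev in /dev/nvme*n1; do
    # Skip the disk that carries the root filesystem; the remaining EBS disk
    # is assumed to be the etcd data volume.
    if ! lsblk -no MOUNTPOINT "$dev" | grep -qx '/'; then
      ln -sf "$dev" "$WANTED"
      break
    fi
  done
fi
```

Hooked in as, say, a oneshot unit ordered Before=var-lib-etcd2.mount, the unit's existing What=/dev/xvdf keeps resolving; a udev-rule variant that maps every NVMe EBS device back to its attach-time name achieves the same thing more generally.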
We just used a different instance type for the time being. Should I close this or leave it open?
@kylegoch Thanks for the confirmation 👍
Just found this issue and wanted to chime in - I spun up a cluster with
It seems this is fixed now?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I was standing up a production cluster and wanted to use an m5.large instance type for my etcd nodes. However, they never stood up. I went back to using a t2 instance type and they stood up, and I could carry on setting up everything.

Here are the logs from the failed etcd node:
Digging around, I traced it to the var-lib-etcd2.mount unit that is a requirement of the etcd-member.service. It was returning these errors:

Not sure if it's something with the way the drives are set up or what, but figured I would pass along.
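For anyone hitting the same thing, the failure can be confirmed with something like this (illustrative commands; the device path is the one the generated mount unit expects, not copied from the node):

```
# etcd-member.service fails because its required mount unit never gets its device.
systemctl status var-lib-etcd2.mount
journalctl -u var-lib-etcd2.mount

# The attach-time device name does not exist on an m5 node...
ls -l /dev/xvdf

# ...but the same EBS volume is present under its NVMe name.
lsblk -d -o NAME,MODEL,SERIAL
```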