[Bug]: Longhorn RWX volumes are not attached #1016
Comments
@janosmiko You might want to SSH into a node and see the logs. Please refer to the Debug section in the readme. @aleksasiriski Maybe you would know something about that specific issue?
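For reference, one way to pull the relevant entries out of a node's logs (the `k3s-agent` unit name and the grep pattern are assumptions, adjust to your setup):

```bash
# Look for NFS/Longhorn mount errors in the k3s agent logs (unit name assumed)
journalctl -u k3s-agent --since "1 hour ago" | grep -iE 'nfs|longhorn|mount'
# Kernel-level mount failures usually also show up in dmesg
dmesg | grep -i nfs
```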
Hi @mysticaltech, I found those logs in the journalctl of the node where the pod that needs the RWX volume runs. And actually, all the rest is working well (e.g. a pod with an RWO volume works as expected).
@janosmiko Please check whether the nfs packages are installed. If not, make sure you are using the latest version of the nodes. See in the packer file how nfs is installed and do the same manually; if that solves it, we will have identified the problem.
Sure, it's installed:
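For anyone checking the same thing, a minimal way to verify on a MicroOS node (package and command names assumed):

```bash
# Confirm the NFS client package and its mount helper are present
rpm -q nfs-client
zypper info nfs-client   # shows the installed version and repository
/sbin/mount.nfs -V       # the mount helper Longhorn's RWX share-manager relies on
```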
Maybe this one is related?
I just did a rollback to the previous version of MicroOS (it did an auto-upgrade at midnight) and now the issue is solved on that node. They definitely updated something in that snapshot. For anyone who faces the same issue:
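A rough sketch of that rollback on MicroOS (the snapshot number is a placeholder, check the snapper output first):

```bash
# List snapshots and pick the one from before the automatic upgrade
sudo snapper list
# Make that snapshot the default boot target, then reboot into it
sudo transactional-update rollback <SNAPSHOT_NUMBER>
sudo reboot
```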
If you want to disable system upgrade manually:
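Something along these lines, assuming the nightly upgrade is driven by the transactional-update timer (as on a stock MicroOS node):

```bash
# Stop and disable the timer that kicks off the automatic OS upgrade
sudo systemctl --now disable transactional-update.timer
```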
@mysticaltech I think there's also a bug in the terraform module.
And none of them seems to work.
Same here. Disaster :-/
@janosmiko The upgrade flags are not retroactive; they take effect on the first deployment only. But see the upgrade section in the readme, you can disable it manually. About the nfs-client, just freeze the version with zypper (via transactional-update shell). After the version is frozen, you can let the upgrades run.
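A minimal sketch of that freeze, using zypper's stock lock mechanism:

```bash
# Open a transactional-update shell so the change lands in a new snapshot
sudo transactional-update shell
zypper addlock nfs-client   # upgrades will now skip the locked package
zypper locks                # verify the lock is listed
exit
sudo reboot                 # boot into the snapshot with the lock applied
```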
Can these be applied on the autoscaled nodes too? Freezing the nfs-client can be a good solution for the already existing nodes, but autoscaled (newly created) nodes will be created using the new package version. :/ |
I reported it here: |
@janosmiko Yes you can ssh into autoscaled nodes too. And what you could do is freeze the version at the packer level and publish the new snapshot (just apply packer again, see readme).
@janosmiko @Robert-turbo If you folks can give me the working version of the nfs-client, I will freeze it at the packer level. These kinds of packages do not need to be updated often. (Then you can just recreate the packer image again, I will tell you how, just one command, so that all new nodes get a working version.)
Working version:
Problematic version:
Should be fixed in v2.8.0, but the image update is needed; please follow the steps laid out in #794.
(The solution was to install and freeze an older version of nfs-client; see the solution linked above for manual fixes.)
Hi @mysticaltech, and it still doesn't work. I tested it even by manually pinning only the nfs-client version on all my nodes, and the RWX Longhorn volumes are still not mounted. https://bugzilla.suse.com/show_bug.cgi?id=1216201#c2 Also, installing an x86-64 package on the ARM snapshots will not work.
@mysticaltech See the progress in the related bug report: longhorn/longhorn#6857
Thanks for the details @janosmiko, you are right. Will revert the change to pin the version and wait for more feedback on this issue.
The changes to the base images pinning nfs-client were reverted in v2.8.1.
@janosmiko As this is a Longhorn bug, there is nothing else we can do here, closing for now. Thanks again for all the research and the info.
It's not a Longhorn bug, but actually a bug in the Linux kernel. For anyone who faces the same issue and wants a real (and tested) solution... SSH to all your worker nodes and run these commands:
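A hedged sketch of the approach (the kernel version string is a placeholder; use whichever version was still working for you):

```bash
# Downgrade to the last known-good kernel and lock it so upgrades keep it
sudo transactional-update shell
zypper install --oldpackage kernel-default-<GOOD_KERNEL_VERSION>
zypper addlock kernel-default   # prevent the broken kernel from coming back
exit
sudo reboot                     # boot into the pinned kernel
```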
If you'd like to make sure the autoscaled nodes also have this pinned kernel, delete the previous snapshots from hcloud, then modify the packer config with these:
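Roughly, the idea is to bake the same downgrade-and-lock into the snapshot's provisioning step; a hypothetical addition (the kernel version and the exact placement in the packer template are assumptions):

```bash
# Added to the image provisioning commands in the packer template
zypper --non-interactive install --oldpackage kernel-default-<GOOD_KERNEL_VERSION>
zypper --non-interactive addlock kernel-default
```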
and rerun packer. Wait for the images to be built and finally run terraform apply.
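For reference, the rebuild-and-roll-out sequence looks roughly like this (the template path is an assumption, use the one from your checkout):

```bash
# Rebuild the MicroOS snapshots with the pinned kernel, then roll them out
packer build packer-template/hcloud-microos-snapshots.pkr.hcl
# Once the new snapshots exist, apply so new and autoscaled nodes use them
terraform apply
```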
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Description
Hi,
I'm running multiple clusters built with this solution. Today, all the Longhorn RWX mounts suddenly stopped working in all of my clusters.
Previously I used Longhorn 1.5.1; I have now rolled back to 1.4.3, but the result is the same.
This is all I found in the logs:
I'm using a self-installed Longhorn, so it's disabled in the kube.tf, but this is the values.yaml I'm using. This also worked yesterday, so I'd say it's not related to the issue.
Do you have any ideas or advice on how to debug it further?
Kube.tf file
Screenshots
No response
Platform
Linux