K3s support #12973
Issues go stale after a period of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. If this issue is safe to close now, please do so.
FWIW I don't believe we do anything outside the v1 API anymore, since deployments moved out of v1beta2
@amisevsk I believe we use Ingresses, which aren't in the v1 API. This'll break K3s
@sr229 You are right, I missed that one.
I think at this rate we can keep the same chart, but we have to revise the Ingress resources to use a
It is indeed possible to set up Che on k3s. Otherwise I pretty much followed the Azure instructions, so I deployed ingress-nginx as described there. One problem was cert-manager updating the domain. I had the domain on AzureDNS, but I'm pretty sure the free option (CloudFlare DNS) would have worked just as well. When using a more recent version of cert-manager (0.14.2, installed with kubectl from this url: https://github.com/jetstack/cert-manager/releases/download/v0.14.2/cert-manager.yaml), some of the yml changed. Here are the examples I used:
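A minimal sketch of the kind of ClusterIssuer this refers to, assuming cert-manager 0.14's `cert-manager.io/v1alpha2` API with an AzureDNS DNS01 solver (none of the IDs, names, or domains below are the original values):

```yaml
# ClusterIssuer with an ACME DNS01 solver backed by AzureDNS.
# All IDs, names, and domains are placeholders.
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - dns01:
        azureDNS:
          clientID: 00000000-0000-0000-0000-000000000000
          clientSecretSecretRef:
            name: azuredns-config
            key: client-secret
          subscriptionID: 00000000-0000-0000-0000-000000000000
          tenantID: 00000000-0000-0000-0000-000000000000
          resourceGroupName: my-dns-rg
          hostedZoneName: example.com
```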
and for the cert:
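Again a sketch rather than the original snippet: a `v1alpha2` Certificate requesting a wildcard for the Che domain (secret name, namespace, and DNS names are assumptions):

```yaml
# Certificate matching the ClusterIssuer above; cert-manager writes the
# resulting key pair into the secret named by spec.secretName.
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: che-tls
  namespace: che
spec:
  secretName: che-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - '*.che.example.com'
```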
Also, for chectl to be able to deploy, I had to separate the ca.crt from the tls.crt, which seems to be a known bug:
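The exact commands were not preserved; one plausible way to do the split is to dump the combined chain from the TLS secret and cut it at the certificate boundaries (secret name and namespace are assumptions):

```bash
# Dump the combined certificate chain from the TLS secret (placeholder names).
kubectl get secret che-tls -n che -o jsonpath='{.data.tls\.crt}' | base64 -d > chain.pem
# Split the PEM bundle into one file per certificate; the last file is the CA.
csplit -z -f cert- chain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
```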
I will write up a more complete howto next weekend. For now, I hope this helps anyone searching for Che and k3s and finding this.
Thanks for trying this out @petzsch, it's great you were able to get up and running! If you have time, it would be great to contribute something to the Che documentation. I haven't played with k3s so I'm kind of useless here.
I can review the document when it gets PRed. We would need a K3s guide.
Still waiting for my employer to provide me with 2 new VPS for setting up Che+k3s again on a clean install. While doing so, I'll document the steps taken and create a PR. At the moment I'm struggling to get the che-doc repository loaded in Che. Maybe you've seen the issue I created for that? jekyll starts when the container starts, binding to the default port and serving from the wrong working dir. I believe that to be an issue with the Dockerfile, but maybe I am doing it wrong.
i also use k3s for most of my deployments, because it's very user friendly and resource saving. one of the nice features of k3s may be seen in the fact that you can easily prepare a setup by placing some static configuration files in a manifests directory. i personally would prefer this kind of static setup instead of manual invocation of chectl. as already mentioned by others, i also do not use the default ingress.

if you want to see an example setup for this kind of deployment, take a look at: https://gitlab.com/mur-at-public/kube, although it's unfortunately documented in German. i'll take a look, if i can deploy che in this kind of environment as well.
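To illustrate the static-manifests idea (an added example, not the commenter's): k3s applies any file dropped into `/var/lib/rancher/k3s/server/manifests` at startup, and ships a `HelmChart` CRD, so a declarative Che install could hypothetically look like this (chart name, repo URL, and domain are made up):

```yaml
# Auto-deployed by k3s when placed in /var/lib/rancher/k3s/server/manifests/.
# The embedded helm controller installs the chart into targetNamespace.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: che
  namespace: kube-system
spec:
  chart: eclipse-che                  # assumed chart name
  repo: https://charts.example.com    # placeholder chart repository
  targetNamespace: che
  valuesContent: |-
    global:
      ingressDomain: che.example.com  # placeholder domain
```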
Note that theoretically, we could be able to support traefik, if we can come up with a set of ingress annotations for it that provide what we currently get from nginx.
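As a rough sketch of what that could mean (not from the original thread): an Ingress of that era targeting the traefik controller instead of nginx. The annotation set is illustrative, not a verified mapping of everything Che needs:

```yaml
apiVersion: extensions/v1beta1       # Ingress had not reached v1 at the time
kind: Ingress
metadata:
  name: che-ingress                  # hypothetical name
  annotations:
    kubernetes.io/ingress.class: traefik   # route through traefik, not nginx
spec:
  rules:
  - host: che.example.com            # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: che-host      # assumed service name
          servicePort: 8080
```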
yes, that's possible, but you'll lose a lot of useful capabilities in this case (e.g. all the middleware stuff, raw TCP/UDP port forwarding etc.). there are good reasons why most of the more powerful kubernetes ingress solutions have in the meanwhile changed their behavior in a similar manner as istio, to provide additional features.

btw: another important feature of k3s, which i forgot to mention in my previous post, is the local storage provider, which could be very useful in minimalist che setups.

right now i'm still fighting with the fact that all present documentation of che doesn't describe a simple setup with native kubernetes manifests or helm charts anymore. does anybody know if this tool is at least able to output its commands as manifest sequences on stdout? that's the compromise which was chosen e.g. in the linkerd2 config tool and similar software, as a workaround to support both kinds of utilization. i couldn't find such an option in the chectl documentation.
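For context, the linkerd workflow referenced above looks like this; whether chectl has an equivalent flag is exactly the open question, so no chectl option is shown:

```bash
# linkerd renders all of its manifests to stdout instead of applying them,
# so the output can be reviewed, versioned, and applied declaratively.
linkerd install > linkerd.yaml
kubectl apply -f linkerd.yaml
```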
I might have the wrong perspective about the local-path storage provider, but it gave me quite a bit of headache with its placement constraints (I think that's what they were called) when I tried to scale down my cluster. I had volumes attached to all of my nodes and couldn't find a way to migrate them to other worker nodes. That's why I wanted to have a look at longhorn and give it a try. Personally, chectl hasn't been an issue with k3s, though I would also prefer a more transparent workflow like with helm charts. I assume there were reasons for this decision.
for real multinode clusters you will usually choose and set up an optimal solution with care (ceph etc.), but k3s is in most cases used in rather simple scenarios, just like minikube. i only mentioned this detail because even in this kind of minimalist utilization, the type of persistent volume your jobs run on makes an observable performance difference (e.g. when compiling code). that's why it's often useful to utilize these implementation-specific capabilities, if they are available, even in these very simple setups.
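For reference (an addition, not part of the comment): stock k3s ships the local-path provisioner as its default StorageClass, so pinning a workspace volume to node-local storage only takes the storageClassName; the claim name and size are arbitrary:

```yaml
# PVC bound to k3s's bundled local-path provisioner; the volume lives on
# whichever node the first consuming pod is scheduled to.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: che-workspace-data   # hypothetical claim name
  namespace: che
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
```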
yes -- this may be the case, nevertheless i personally also value transparency and comprehensible control.
Some updates: I think we can use the Traefik ingress which ships in the default k3s install.
Otherwise I think Che should be ready for primetime on this; will give it a check later.
/remove-lifecycle stale
Did anything come of this? I don't see K3s documentation here.
I am interested in this as well, but I do not see a PR connected to it, so I think it was auto-closed due to no movement.
/remove-lifecycle stale
I'm not sure of the status of this issue; I personally don't have time to set up a k3s cluster and start testing. A related issue that may be of interest is devfile/devworkspace-operator#1068, which would be a component of enabling support for traefik as the ingress controller.
I'm targeting such a use case. One workaround is to declare a

Then, we faced concurrent access to the Ingress resource: traefik tries to update the Ingress resource and DWO reconciles it back to its own version. I'm not sure, but I think it is related to the

So, I propose this PR: devfile/devworkspace-operator#1143
I ran some tests and I can state that devfile/devworkspace-operator#1143 fixes the ability for traefik to handle the Kind=Ingress resources generated by DWO.
I think a simpler solution can be to use k3d (https://k3d.io/) to deploy K3s inside Docker, without any impact on your host.
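A minimal sketch of that route (cluster name and port mappings are arbitrary choices, not from the comment):

```bash
# Create a disposable k3s cluster in Docker and expose HTTP/HTTPS
# through k3d's load balancer container.
k3d cluster create che-test -p "80:80@loadbalancer" -p "443:443@loadbalancer"
# k3d registers the kubeconfig context as k3d-<cluster name>.
kubectl config use-context k3d-che-test
```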
Half the battle is setting up the cluster, the other half is finding time to run and test DWO on it :) It's still on my TODO list, but in the meantime reporting any issues or submitting PRs is more than welcome
Description
K3s is a Kubernetes distribution intended for IoT, edge, and anyone who wants a minimal Kubernetes experience (a la HyperKube without the pain).
However, K3s removes the following:
- Legacy, alpha, and non-default features
- In-tree cloud providers
- In-tree storage drivers
- Docker as the default container runtime (containerd is used instead)
With this in mind, we need to have a separate manifest for deploying on K3s, as many resources we use have not reached v1 API stability.
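A quick way to check which API versions a given cluster serves (an added illustration; output varies by Kubernetes version):

```bash
# List the API groups/versions the cluster serves; when this issue was filed,
# Ingress still lived in extensions/v1beta1 rather than a v1 group.
kubectl api-versions | grep -Ei 'extensions|networking'
# Inspect the beta Ingress schema directly.
kubectl explain ingress --api-version=extensions/v1beta1
```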