Native Kubernetes Integration #82
This is probably out of scope for Wolf; it feels to me like this should be another piece of software that leverages Wolf inside. Wolf currently allows multiple users to share a single host: it automatically provisions virtual desktops and devices and routes the events from each client to the right Docker container. To simplify:

```mermaid
flowchart TD
    subgraph Wolf
        M("Moonlight server")
        uinput
        GPU
        Docker
    end
    U1("User 1") --> M
    U2("User 2") --> M
    Docker --> C1("Container 1")
    Docker --> C2("Container 2")
```
With k8s you could scale that further, so that multiple clients can be mapped to multiple machines:

```mermaid
flowchart TD
    subgraph K8S
        M("Moonlight server")
        subgraph Pod1
            subgraph Wolf1
                uinput1
                GPU1
                Docker1
                C1("Container 1")
            end
        end
        subgraph Pod2
            subgraph Wolf2
                uinput2
                GPU2
                Docker2
                C2("Container 2")
            end
        end
    end
    U1("User 1") --> M
    U2("User 2") --> M
    M --> Wolf1
    M --> Wolf2
```
I hope this makes sense. I don't exclude that we could work on this in the future, once Wolf is more mature and stable. Controlling multiple machines is definitely outside the scope of Wolf and would be better positioned in a separate program, but there's quite a lot of logic and machinery that could be re-used, IMHO. |
I think the main thing about a Kubernetes integration would be having Wolf talk to the Kubernetes API to create pods, rather than having it use a passed-through Docker socket. Would that be feasible at all, in terms of functionality and code structure? Edit: Looking at docker.hpp, it seems to me that it's not all too far off from fitting the Kubernetes API as well. I have no insight into whether all the other parts of Wolf could work with Kubernetes though, in terms of passing through devices and communicating between containers and such. |
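For context, "talking to the Kubernetes API to create pods" is conceptually close to what docker.hpp already does against the Docker socket: HTTP requests with JSON bodies. Below is a minimal, hypothetical sketch of creating a Pod from inside the cluster using libcurl; the function names are illustrative and not part of Wolf, and it assumes the standard in-cluster service account credentials that Kubernetes mounts into every Pod.

```cpp
// Hypothetical sketch: creating a Pod through the Kubernetes REST API,
// analogous to how a Docker runner talks to the Docker socket over HTTP.
// create_pod/read_sa_token are illustrative names, not Wolf's API.
#include <curl/curl.h>
#include <fstream>
#include <sstream>
#include <string>

// In-cluster clients authenticate with the service account token that
// Kubernetes mounts into every Pod at this well-known path.
std::string read_sa_token() {
  std::ifstream f("/var/run/secrets/kubernetes.io/serviceaccount/token");
  std::stringstream ss;
  ss << f.rdbuf();
  return ss.str();
}

bool create_pod(const std::string &ns, const std::string &pod_json) {
  CURL *curl = curl_easy_init();
  if (!curl) return false;

  const std::string url =
      "https://kubernetes.default.svc/api/v1/namespaces/" + ns + "/pods";
  struct curl_slist *headers = nullptr;
  headers = curl_slist_append(headers, "Content-Type: application/json");
  const std::string auth = "Authorization: Bearer " + read_sa_token();
  headers = curl_slist_append(headers, auth.c_str());

  curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, pod_json.c_str());
  // The cluster CA certificate is also mounted into the Pod by default.
  curl_easy_setopt(curl, CURLOPT_CAINFO,
                   "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt");

  const CURLcode res = curl_easy_perform(curl);
  curl_slist_free_all(headers);
  curl_easy_cleanup(curl);
  return res == CURLE_OK;
}
```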
Yep, that could easily be a different Runner; the problem is more about the locality of things.
That's exactly my point, and why I've included the diagrams above. I think the proper solution would be to decouple the Moonlight protocol from the rest: Wolf can "only" do the last step, but you need something else that coordinates the multiple Wolf instances. |
Is it 'just' Unix sockets that need to be shared, or is there more to it? I think it would be possible to have separate Kubernetes Pods for the different bits, with a socket shared between them, with the caveat that it's a bit unusual and would only work as long as the Pods are on the same Node (but that's honestly fine). Wolf takes a Docker socket and spins stuff up on the fly; is that required, or is it possible to have all the containers started ahead of time instead? If so, having one Pod with all of the Wolf bits inside it would also be a reasonable solution that should work on Kubernetes without (many) changes today. Something relevant here that isn't yet entirely clear to me: in the existing Docker setup, does a single Wolf instance support multiple clients running different things at the same time, or is it Moonlight sending clients to separate Wolf instances? That would influence what the "best practice" way to fit this into Kubernetes is. |
It's not just sockets; we are talking to and sharing virtual devices under
It is required, because you don't know ahead of time what the client settings are going to be (resolution, FPS, how many and which devices to provision; think of joypads, for example) and, most importantly, how many and which apps you want to start.
There's a single Wolf instance that will manage and control multiple containers. You can read more about how it works from a high-level POV in the docs here |
This is absolutely fair. I want to clarify that running in Kubernetes doesn't mean you have to split across multiple machines: you can pin things to one node and still get the other benefits of Kubernetes. My suggestion for a first pass at this request would be adding a Kubernetes backend that spins up a Pod for each container that's needed, uses pod affinity to ensure they all land on the same node, and then just mounts host paths the same way the current Docker setup does. I expect that should Just Work without needing changes in other parts of Wolf (fingers crossed). |
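To make the suggestion above concrete, here is a rough, hypothetical sketch of the Pod body such a Kubernetes backend might POST. It assumes nlohmann::json; the "wolf-session" label and the mounted paths are placeholders, not anything Wolf actually uses. The podAffinity term keeps every session Pod on the node where other "wolf-session" Pods already run, and the hostPath volume mirrors the kind of node-local mounts the Docker runner does today.

```cpp
#include <nlohmann/json.hpp>
#include <string>

// Build a Pod manifest for one session container. The affinity rule pins
// all "wolf-session" Pods to the same node so node-local sockets and
// devices stay reachable; hostPath exposes those node-local paths.
nlohmann::json session_pod(const std::string &name, const std::string &image) {
  auto pod = nlohmann::json::parse(R"({
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"labels": {"app": "wolf-session"}},
    "spec": {
      "affinity": {
        "podAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": [{
            "topologyKey": "kubernetes.io/hostname",
            "labelSelector": {"matchLabels": {"app": "wolf-session"}}
          }]
        }
      },
      "containers": [{
        "name": "app",
        "volumeMounts": [{"name": "input-devices", "mountPath": "/dev/input"}]
      }],
      "volumes": [{
        "name": "input-devices",
        "hostPath": {"path": "/dev/input"}
      }]
    }
  })");
  pod["metadata"]["name"] = name;
  pod["spec"]["containers"][0]["image"] = image;
  return pod;
}
```

In a real Runner the image, devices, and mounts would presumably come from Wolf's existing per-app configuration rather than being hard-coded like this.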
That doesn't sound too bad, and it'll probably fit well as a different Runner implementation. Sounds like this would give me a nice excuse to finally dip my toes into k8s.. 😅 |
I didn't expect you'd be interested in implementing it yourself, but that would be pretty awesome :D Unfortunately I can't really write the code myself (or I'd already be doing so, lol) but if you need any help around Kube at all don't hesitate to ask! Thank you for the great discussion btw :) |
Thank you for sticking around and bringing up some good points! There are a few issues that I would like to tackle first, and I'm not sure when I'll have time to look into this; I'll keep this open if anyone else wants to give it a shot. |
So do Kasm and others; how's this relevant to the discussion? |
It could be a helpful reference for you. The Selkies Desktop Containers work immediately with any Kubernetes environment, and we did this before at https://github.com/selkies-project/selkies-vdi / https://github.com/selkies-project/selkies-operator. I'm looking for more alternative implementations based on hardware acceleration, not less. I'm not trying to take users away, since as you said, we have different goals. I'm trying to help. |
I wouldn't mind taking a whack at this, although I'm not too experienced with C++. I'm happy to see official development documentation; I'll go ahead and fork this project and see what I can start cooking. I agree with keeping this scoped to one node and implementing a new Runner. |
Looking into this a little further, there are no really good C++ libraries for this; however, there is a C library that I could wrap. Given that, would the project maintainers be against introducing another language (Rust?) to implement this feature? |
Never mind, I'll go ahead and write a C++ HTTP client library for CRUD on pods. It will be a fun chance to learn C++. |
Nothing against Rust; in fact, we use it quite extensively for our custom Wayland compositor. My only advice would be that wrapping a C library can be done safely in C++ using smart pointers, instead of pulling in Rust just to implement a wrapper. I'd be more than happy to provide more guidance, at least on setting things up; feel free to give me a shout on Discord! Also, if you want to dive into the code I really recommend using devcontainers; all dependencies will already be set up and you should have a working environment automagically in a few minutes (and I should add that to the docs..). |
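To make the smart-pointer advice above concrete, here is a tiny sketch of owning a C library's handles through std::unique_ptr with a custom deleter. libcurl stands in for whatever C Kubernetes client ends up being wrapped; the names are illustrative.

```cpp
#include <curl/curl.h>
#include <memory>

// unique_ptr invokes curl_easy_cleanup automatically, so the handle
// cannot leak on early returns or exceptions.
struct CurlDeleter {
  void operator()(CURL *h) const noexcept { curl_easy_cleanup(h); }
};
using CurlHandle = std::unique_ptr<CURL, CurlDeleter>;

CurlHandle make_curl() {
  return CurlHandle{curl_easy_init()};
}

int main() {
  CurlHandle curl = make_curl();
  if (curl) {
    // Use .get() wherever the C API expects the raw handle.
    curl_easy_setopt(curl.get(), CURLOPT_URL,
                     "https://kubernetes.default.svc");
  }
  // No manual cleanup: the deleter runs when `curl` goes out of scope.
}
```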
I'll take that approach. Thanks for being available; I look forward to talking with you! I will definitely use those dev containers. |
Does this issue still fall under the |
I don't use k8s in my homelab, so personally, I don't have much reason to work on this. I'll change the label accordingly 😉 |
Please consider integrating with the Kubernetes API to deploy Steam, Firefox, etc. in additional pods.