Worker hw transcode #81
Conversation
Looks great!!
Did you also have to enumerate devices explicitly in docker-compose, or run in privileged mode?
There may be some nice info around that worth adding to the README, if others would find it useful.
The device enumeration is the same as with normal Plex, except it's applied to the transcoder only; no privileged mode is required. Basically the steps from the official Plex docs, but applied to the worker node: https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/. These changes are what's required for CUDA/NVIDIA:
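For reference, a minimal sketch of the kind of compose changes being described, assuming the NVIDIA container runtime is installed on the worker host. The service and image names are placeholders, not the exact diff from this PR:

```yaml
# docker-compose.yml excerpt (illustrative) — only the worker/transcoder
# service gets the GPU; no privileged mode needed.
services:
  plex-worker:
    image: your-worker-image                # placeholder, not the project's image name
    runtime: nvidia                         # hand this service to the NVIDIA container runtime
    environment:
      - NVIDIA_VISIBLE_DEVICES=all          # expose the host GPUs to this container
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility  # capabilities needed for NVENC/NVDEC
```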
Nice! Also, AFAIK worker nodes don't need the shared config from Plex, but maybe it's different when doing hardware transcoding.
It's running standalone; I don't think you can pass devices through to more than one container at a time, so I'm fairly certain swarm won't work. As for the config, that's a holdover from trying to get EAE working, but in the end I decided to add the local decode flag :)
FYI: these changes, along with the instructions from https://gist.github.com/coltonbh/374c415517dbeb4a6aa92f462b9eb287, have allowed me to schedule a plex-worker using the NVIDIA GPU on an available node in my swarm. I currently only have a single node with a discrete GPU, but for nodes without one I just run a non-hw-accel version of the same container in the stack. As long as you define your generic resources on each node in the swarm and don't try to scale above what's available, each task in the service will take its own GPU and run with it (see the sketch below). At this point I'm starting to explore allowing more than one task or container to use the same GPU so I don't leave any performance on the table. If I figure that out, I'll update here.
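For anyone trying to reproduce that, a hedged sketch of the swarm scheduling piece, assuming the daemon-side setup from the gist linked above (advertising each GPU via node-generic-resources in /etc/docker/daemon.json and enabling swarm-resource in the NVIDIA container runtime config) is already done. The resource name NVIDIA-GPU is whatever you advertised there, and the image is a placeholder:

```yaml
# Illustrative stack file excerpt for `docker stack deploy` (compose v3.8).
# Each replica reserves one advertised GPU, so scaling beyond the GPUs
# available in the swarm leaves tasks unscheduled instead of over-subscribing.
version: "3.8"
services:
  plex-worker:
    image: your-worker-image                # placeholder
    deploy:
      replicas: 1
      resources:
        reservations:
          generic_resources:
            - discrete_resource_spec:
                kind: "NVIDIA-GPU"          # must match the name in node-generic-resources
                value: 1                    # one GPU per task
```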
This PR allows hardware acceleration to be hinted on worker nodes. A new optional env variable, FFMPEG_HWACCEL, is added to the worker docker compose; it is then added to the payload prior to transcoding. This allows different workers to use hardware acceleration or not, or to use different drivers, depending on their hardware configuration.
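For example, a worker opting in might look something like this. The value shown is an assumption based on ffmpeg's -hwaccel names (e.g. nvdec, vaapi), not taken from this PR:

```yaml
# Worker service excerpt (illustrative). FFMPEG_HWACCEL is the new optional
# variable from this PR; workers without a GPU simply omit it.
services:
  plex-worker:
    environment:
      - FFMPEG_HWACCEL=nvdec   # assumed value: an ffmpeg -hwaccel name
```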
Worker logs:

nvidia-smi output: