[Feature]: Sharing without restrictions #66
Comments
This is also really useful when using the GPU for light tasks like transcoding. This is a feature that the Intel GPU Device Plugin has had for a really long time with the …
Is there any other way to do this? I'm using …
Nvidia has GPU sharing documented in their device plugin (https://github.com/NVIDIA/k8s-device-plugin?tab=readme-ov-file#shared-access-to-gpus), but I really want to stay as close to team red as possible. Currently I have to run things directly on the node, without containers.
Would also love to see this implemented!
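For context, the sharing mode documented at that NVIDIA link is time-slicing, enabled through the plugin's config file. A minimal sketch based on that README (the `replicas` count is illustrative, not a recommendation):

```yaml
# NVIDIA k8s-device-plugin time-slicing config (sketch based on the linked README).
# Each physical GPU is advertised as `replicas` schedulable nvidia.com/gpu resources,
# so multiple pods can land on the same card with no memory or fault isolation.
version: v1
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/gpu
      replicas: 4
```

This is the kind of oversubscription knob the issue is asking for on the AMD side.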
Suggestion Description
I've got a W6800 that is awesome, but I'm stuck scheduling it against either my Photoprism server for encoding acceleration, LocalAI for AI resources, or ffmpeg jobs for one-time encoding of raw footage. A GPU as big as this can be shared. Of course, if it's not managed properly on my end, apps can crash, but 32 GB is a LOT of play room for one container.
I want the ability to assign more than one workload to this GPU. Bonus points if there's a way to do memory management, but that's not required at all.
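As a sketch of the request, assuming the `amd.com/gpu` resource name exposed by the AMD device plugin (pod and image names here are placeholders): each pod below requests the GPU, and today the resource is exclusive, so on a single-GPU node only one of them can schedule at a time. The feature being asked for is for both to run on the same W6800.

```yaml
# Two workloads each requesting the one GPU (names/images are hypothetical examples).
# Without sharing support, the scheduler can place only one of these on a
# single-GPU node; the other stays Pending.
apiVersion: v1
kind: Pod
metadata:
  name: localai
spec:
  containers:
  - name: localai
    image: localai/localai:latest
    resources:
      limits:
        amd.com/gpu: 1
---
apiVersion: v1
kind: Pod
metadata:
  name: ffmpeg-encode
spec:
  containers:
  - name: ffmpeg
    image: linuxserver/ffmpeg:latest
    resources:
      limits:
        amd.com/gpu: 1
```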
Operating System
Arch Linux
GPU
W6800
ROCm Component
No response