
GPU is not used in LocalAI - even though it works in other local AI repos #5063


Open
mekler22 opened this issue Mar 24, 2025 · 0 comments
Labels: bug (Something isn't working), unconfirmed

Comments

@mekler22

LocalAI version: 2.26
Image: localai/localai:latest-aio-gpu-nvidia-cuda-12

Environment, CPU architecture, OS, and Version:
Linux havenstore 4.4.302+ #72806 SMP Thu Sep 5 13:45:09 CST 2024 x86_64 GNU/Linux synology_broadwellnk_3622xs+

Describe the bug
After bringing up the container with Docker Compose configured for GPU usage (the GPU-enabled image and the necessary deploy section in docker-compose.yml, sketched below the warning), the running LocalAI is unable to make use of my GPU and prints the following warning:

```
WARNING:
localai-api-1 | /sys/class/drm does not exist on this system (likely the host system is a
localai-api-1 | virtual machine or container with no graphics). Therefore,
localai-api-1 | GPUInfo.GraphicsCards will be an empty array.
```
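
For reference, a minimal GPU-enabled Compose file for this image would look roughly like the sketch below. The service name, port mapping, and model path are assumptions, not the reporter's actual configuration; the deploy section is the standard Compose syntax for reserving NVIDIA GPUs:

```yaml
# Minimal sketch, assuming a default LocalAI setup: the service name, port,
# and model directory are placeholders. The image tag is the one from this
# report.
services:
  api:
    image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    ports:
      - "8080:8080"
    volumes:
      - ./models:/build/models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # requires the NVIDIA Container Toolkit on the host
              count: 1
              capabilities: [gpu]
```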

To Reproduce
1. docker-compose up
2. Using the web UI, open a chat and ask a question (for instance).

Expected behavior
The container should be able to use the GPU (which it "sees" via the nvidia-smi command).
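
A quick way to confirm the GPU is actually visible from inside the container (the container name localai-api-1 is taken from the log prefix above) would be:

```sh
# Run nvidia-smi inside the running LocalAI container; if the GPU is exposed
# correctly, this should list the card and its driver/CUDA versions.
docker exec localai-api-1 nvidia-smi
```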

Logs

Additional context
The resource monitor inside Synology DSM shows that the GPU is not used at all during chat sessions with LocalAI.
