Your current environment
CUDA out of memory.
🐛 Describe the bug
I built a Docker image with vLLM. When I run it locally, everything works and GPU memory usage matches what I expect (see the figure). But when I deploy the same image on a server with driver version 525 and CUDA 12.0, GPU memory usage is about 3 GB higher than expected, even though the GPUs on both servers are exactly the same.
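To narrow down where the extra ~3 GB goes, it can help to print free/total GPU memory right before and after vLLM starts on both servers. The sketch below is only an illustration, not taken from this report: the model name `facebook/opt-125m` and the `gpu_memory_utilization=0.90` value are placeholders; substitute the actual model and settings used in the image.

```python
# Minimal sketch: compare GPU memory before/after vLLM initialization on each host.
# Assumptions: model name and gpu_memory_utilization are placeholders, not from this issue.
import torch
from vllm import LLM

free, total = torch.cuda.mem_get_info()
print(f"before vLLM: free={free / 2**30:.2f} GiB, total={total / 2**30:.2f} GiB")

# vLLM profiles the GPU and pre-allocates KV-cache blocks up to roughly
# gpu_memory_utilization * total memory, so anything already resident on the card
# (driver/runtime overhead, other processes) changes the absolute usage you observe.
llm = LLM(model="facebook/opt-125m", gpu_memory_utilization=0.90)

free, total = torch.cuda.mem_get_info()
print(f"after vLLM:  free={free / 2**30:.2f} GiB, total={total / 2**30:.2f} GiB")
```

Running this on both machines (together with `nvidia-smi` output) would show whether the difference comes from vLLM's cache pre-allocation or from memory already occupied before vLLM starts.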
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.