Rerank model deployed on two 4090 GPUs cannot compute in parallel #3222
Set replica to 2.
@qinxuye reranker.xinference_rerank:_rerank_batch:77 - rerank response text: {"detail":"Model actor is out of memory, model id: bge-reranker-v2-m3-1, error: CUDA out of memory. Tried to allocate 2.00 GiB. GPU 0 has a total capacity of 23.55 GiB of which 848.44 MiB is free. Process 159 has 0 bytes memory in use. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 19.13 GiB is allocated by PyTorch, and 525.93 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"}
How should Xinference be configured to avoid this OOM?
Do you have two GPUs? Setting replica to 2 is enough; there is no need to configure gpu idx.
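For reference, launching the reranker with two replicas via the Python client might look like the sketch below. This assumes the standard `xinference.client.Client` API and the default supervisor endpoint on port 9997; adjust the URL and model name to your deployment.

```python
from xinference.client import Client

# Connect to a running Xinference supervisor (default port 9997 assumed).
client = Client("http://127.0.0.1:9997")

# Launch the reranker with two replicas; with two visible GPUs the
# scheduler should place one replica on each card.
model_uid = client.launch_model(
    model_name="bge-reranker-v2-m3",
    model_type="rerank",
    replica=2,
)
print(model_uid)
```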
More than two GPUs.
Multiple replicas cannot be pinned to specific gpu idx values yet. To limit which GPUs Xinference uses, set CUDA_VISIBLE_DEVICES when starting it.
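A minimal sketch of that workaround, wrapping the launch in Python; the `xinference-local` entrypoint and its `-H`/`-p` flags are assumed from the public docs:

```python
import os
import subprocess

# Expose only GPUs 0 and 1 to this Xinference worker; inside the process
# they appear as cuda:0 and cuda:1, and no other devices are visible.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="0,1")
subprocess.run(
    ["xinference-local", "-H", "0.0.0.0", "-p", "9997"],
    env=env,
    check=True,
)
```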
Could we instead run multiple Xinference service instances in Docker, with each instance assigned one device idx? 🤣
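One possible shape of that idea, sketched in Python via subprocess. The image name `xprobe/xinference`, the NVIDIA container toolkit `--gpus device=N` syntax, and the port mapping are assumptions; requests would still have to be spread across the two endpoints by the caller or an external load balancer.

```python
import subprocess

# One container per GPU, each pinned to a single device and exposing
# its own host port.
for gpu_idx, host_port in [(0, 9997), (1, 9998)]:
    subprocess.run(
        [
            "docker", "run", "-d",
            "--gpus", f"device={gpu_idx}",
            "-p", f"{host_port}:9997",
            "xprobe/xinference",
            "xinference-local", "-H", "0.0.0.0",
        ],
        check=True,
    )
```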
The plan is to make worker_ip and gpu_idx support multiple replicas and distributed inference later on. It is still being designed.
I have two GPUs. With replica > 2, how do I deploy multiple replicas of the model on a single GPU?
I am running rerank over 600 documents concurrently with asyncio, but one GPU still sits at 100% load while the other stays at 0%. Is any special configuration required?
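To make the symptom reproducible, the concurrent workload might look roughly like this. It is a sketch assuming Xinference's `/v1/rerank` REST endpoint and the httpx library; the query and documents are placeholders.

```python
import asyncio

import httpx

# Assumed default REST endpoint of a local Xinference deployment.
RERANK_URL = "http://127.0.0.1:9997/v1/rerank"

async def rerank(client: httpx.AsyncClient, query: str, docs: list[str]) -> dict:
    # With replica=2, the supervisor is expected to spread these
    # requests over both replicas (and hence both GPUs).
    resp = await client.post(
        RERANK_URL,
        json={"model": "bge-reranker-v2-m3", "query": query, "documents": docs},
    )
    resp.raise_for_status()
    return resp.json()

async def main() -> None:
    # 600 independent rerank requests fired concurrently, mirroring the
    # workload described above.
    async with httpx.AsyncClient(timeout=120.0) as client:
        tasks = [rerank(client, "sample query", ["doc a", "doc b"]) for _ in range(600)]
        results = await asyncio.gather(*tasks)
    print(len(results), "responses")

asyncio.run(main())
```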