
Commit 57ec9da

fix vllm backend

1 parent c4bf8e7 · commit 57ec9da

File tree

1 file changed: +1 -1


gpt_server/model_backend/vllm_backend.py (+1 -1)
@@ -104,7 +104,7 @@ async def stream_chat(self, params: Dict[str, Any]) -> AsyncGenerator:
         else:
             input_ids = params.get("input_ids", None)
         inputs = {"prompt": prompt}
-        if not input_ids:
+        if input_ids is not None:
             prompt_token_ids = input_ids.tolist()[0]
             inputs["prompt_token_ids"] = prompt_token_ids
         # ----------------------------------------------------------------
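For context on the one-line change: the old guard "if not input_ids:" was broken in both directions. Coercing a multi-element tensor to bool raises a RuntimeError, and when input_ids is None the branch would run anyway and crash on None.tolist(). Below is a minimal standalone sketch of both failure modes; it assumes input_ids arrives as a torch.Tensor of shape [1, seq_len] (consistent with the .tolist()[0] call in the diff), which is an assumption for illustration, not code taken from the repository.

# Sketch of why the old guard fails. Assumption: input_ids is a
# torch.Tensor of shape [1, seq_len], as .tolist()[0] above implies.
import torch

input_ids = torch.tensor([[1, 2, 3]])

# Old guard: `if not input_ids:` asks for the tensor's truth value.
# A tensor with more than one element cannot be coerced to bool:
try:
    if not input_ids:
        pass
except RuntimeError as err:
    # "Boolean value of Tensor with more than one element is ambiguous"
    print(err)

# And when input_ids is None, the old (inverted) condition is True,
# so the branch would crash on None.tolist(). The fixed guard only
# builds prompt_token_ids when input_ids actually exists:
if input_ids is not None:
    prompt_token_ids = input_ids.tolist()[0]
    print(prompt_token_ids)  # [1, 2, 3]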

Comments (0)