WIP [P/D] Use ThreadPoolExecutor to do handshake for each P-D pair #19823
Conversation
Signed-off-by: Linkun Chen <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Summary of Changes
Hello @lk-chen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the efficiency of KV cache transfer by offloading the `nixl_handshake` process to a background thread pool. This change ensures that handshake operations, which can be time-consuming, no longer block the main execution flow, leading to smoother and more responsive KV transfer initiation. It also refines the reporting of KV transfer statuses to include requests that are pending handshake completion.
Highlights
- **Asynchronous Handshake**: Implemented `_nixl_handshake` using `ThreadPoolExecutor` to perform handshakes in the background, preventing blocking of the main thread during KV transfer operations. This allows the system to initiate handshakes for new P-D (Prefill-Decode) pairs without waiting for completion.
- **Enhanced KVTransferResult**: The `KVTransferResult` dataclass has been extended to include `pending_handshake` information, providing a more comprehensive status of KV transfer requests, including those awaiting handshake completion.
- **Request Management for Handshakes**: Introduced mechanisms (`_handshake_futures`, `_pending_requests`, `_ready_requests`) to manage requests that are waiting for a handshake to complete or are ready to proceed after a successful handshake. Requests are queued and processed asynchronously.
- **Multi-Rank Coordination Update**: The coordination logic for `get_finished` across multiple Tensor Parallel (TP) ranks has been updated to correctly aggregate and report `pending_handshake` statuses in addition to finished sending and receiving requests.
- **Type Hinting Improvements**: Added `EngineId` and `ReqId` type aliases and applied them consistently across relevant classes and methods (`ReqMeta`, `NixlConnectorMetadata`, `NixlConnectorWorker`, etc.) for improved code clarity and maintainability.
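The asynchronous flow in the highlights above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `HandshakeManager`, `start_handshake`, and the `handshake_fn` parameter are hypothetical stand-ins for the real `NixlConnectorWorker` logic.

```python
from concurrent.futures import Future, ThreadPoolExecutor
from threading import Lock


class HandshakeManager:
    """Sketch: submit handshakes to a background pool, move requests
    to a ready queue on success, leave them pending on failure."""

    def __init__(self, max_workers: int = 4):
        self._executor = ThreadPoolExecutor(max_workers=max_workers)
        self._lock = Lock()
        self._handshake_futures: dict[str, Future] = {}
        self._pending_requests: dict[str, list[str]] = {}
        self._ready_requests: list[str] = []

    def start_handshake(self, engine_id: str, req_id: str, handshake_fn) -> None:
        with self._lock:
            self._pending_requests.setdefault(engine_id, []).append(req_id)
            if engine_id in self._handshake_futures:
                return  # a handshake is already in flight for this engine
            fut = self._executor.submit(handshake_fn, engine_id)
            self._handshake_futures[engine_id] = fut
        # Register outside the lock: if the future already completed, the
        # callback runs synchronously in this thread and takes the lock itself.
        fut.add_done_callback(lambda f, eid=engine_id: self._on_done(eid, f))

    def _on_done(self, engine_id: str, fut: Future) -> None:
        with self._lock:
            # Clean up the future in all cases.
            self._handshake_futures.pop(engine_id, None)
            if fut.exception() is None:
                # Success: queued requests become ready for processing.
                self._ready_requests.extend(
                    self._pending_requests.pop(engine_id, []))
            # Failure: requests stay pending so the scheduler can retry.
```

A caller would invoke `start_handshake("engine-0", "req-1", handshake_fn)` and then poll the ready queue; the key property is that `start_handshake` never blocks on the handshake itself.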
Code Review
This pull request refactors the NIXL connector to perform handshakes asynchronously using a `ThreadPoolExecutor`. This is a positive change for non-blocking behavior. The `get_finished` interface is updated to return `KVTransferResult`, incorporating pending handshake information. My review focuses on the correctness of the asynchronous handshake logic, thread safety, error handling, and overall clarity.
```python
# Clean up futures. In case of failure, requests will remain
# pending and be reported to scheduler for retry.
del self._handshake_futures[engine_id]
```
The comment "In case of failure, requests will remain pending and be reported to scheduler for retry" seems to conflict with the action in the `except` block (line 504), where `self._pending_requests.pop(engine_id, None)` is called. Please clarify if the intent is for these specific requests to be retried by the scheduler or if they should remain in `_pending_requests`.
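One way to make the comment and the code agree, assuming the intent is the retry path, is to pop `_pending_requests` only on success. This is a hypothetical standalone sketch (the real code is a method on the connector worker; the dict/list parameters here stand in for its instance state):

```python
from concurrent.futures import Future
from threading import Lock


def handshake_done(engine_id: str,
                   future: Future,
                   lock: Lock,
                   handshake_futures: dict,
                   pending_requests: dict,
                   ready_requests: list) -> None:
    """Hypothetical done-callback: always discard the future, but
    consume pending requests only when the handshake succeeded."""
    with lock:
        # Clean up the future in all cases.
        handshake_futures.pop(engine_id, None)
        if future.exception() is None:
            ready_requests.extend(pending_requests.pop(engine_id, []))
        # On failure, pending_requests[engine_id] is left intact so the
        # scheduler still sees the requests as pending and can retry.
```

With this shape, a failed handshake leaves the requests visible in `pending_requests`, which matches the comment's stated intent.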
```python
def get_pending_handshake_req_ids(self) -> set[str]:
    """Get request IDs that are currently pending handshake completion."""
    if self.connector_worker is not None:
        result = self.connector_worker.get_finished(set())
        return result.pending_handshake
    return set()
```
```diff
@@ -178,7 +190,7 @@ class NixlConnectorScheduler:
     def __init__(self, vllm_config: VllmConfig, engine_id: str):
```
For consistency with the type hint of `self.engine_id` (which is `EngineId`), consider typing the `engine_id` parameter in the `__init__` signature as `EngineId` as well.
```diff
-    def __init__(self, vllm_config: VllmConfig, engine_id: str):
+    def __init__(self, vllm_config: VllmConfig, engine_id: EngineId):
```
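The `EngineId` and `ReqId` aliases this PR adds are plain `str` assignments, so the suggested signature change is purely for readability. A minimal sketch (the `route_request` helper is hypothetical, only illustrating the aliases in a signature):

```python
# Semantic aliases: both are plain strings at runtime, per this PR.
EngineId = str
ReqId = str


def route_request(engine_id: EngineId, req_id: ReqId) -> str:
    """Hypothetical helper showing the aliases used in a signature."""
    return f"{engine_id}/{req_id}"
```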
```python
if self._nixl_handshake_listener_t:
    self._nixl_handshake_listener_t.join(timeout=0)
```
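Note that `Thread.join(timeout=0)` returns almost immediately and never raises; it does not stop or wait for the thread, so callers must check `is_alive()` to learn whether the thread actually exited. A small standalone demonstration of that semantics (not PR code):

```python
import threading

stop = threading.Event()
t = threading.Thread(target=stop.wait, daemon=True)
t.start()

t.join(timeout=0)            # returns immediately; does not stop the thread
still_running = t.is_alive()  # True: the worker is still blocked on the event

stop.set()                   # let the worker exit
t.join()                     # blocking join now completes
finished = not t.is_alive()
```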
```python
"""Start handshake using ThreadPoolExecutor.

This method is non-blocking and submits `_nixl_handshake` to the
background thread pool.
```
```python
logger.debug(
    "Handshake completed for engine %s. "
    "Moved %d requests to ready queue for processing",
    engine_id, len(completed_reqs))
```
The logging message here has extra leading spaces in the format string. This might be unintentional and could affect log formatting. The suggested call is:

```python
logger.debug(
    "Handshake completed for engine %s. "
    "Moved %d requests to ready queue for processing",
    engine_id, len(completed_reqs))
```
```python
logger.warning(
    "Handshake failed for engine %s, leaving"
    "%d requests pending for scheduler retry",
    engine_id, len(failed_reqs))
```
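Note the adjacent string literals in this warning: Python concatenates them at compile time with no separator, so `"...leaving"` and `"%d requests..."` render as `leaving3 requests`. A quick demonstration of the pitfall, using the same `%`-style formatting that `logging` applies:

```python
# Adjacent string literals are concatenated at compile time.
buggy = ("Handshake failed for engine %s, leaving"
         "%d requests pending for scheduler retry") % ("e0", 3)

# A trailing space on the first literal fixes the rendered message.
fixed = ("Handshake failed for engine %s, leaving "
         "%d requests pending for scheduler retry") % ("e0", 3)
```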
```python
with self._lock:
    pending_handshake = set()
    for pending_reqs in self._pending_requests.values():
        pending_handshake.update({req_id for req_id, _ in pending_reqs})
```
```python
rank_data = self.tp_group.recv_object(src=i)
other_rank_result = KVTransferResult.from_tuple(
    rank_data)
```
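Sending results between TP ranks as plain tuples requires a stable field order on both ends. A hedged sketch of what such a `KVTransferResult` round-trip could look like — the field names follow this PR's description (`finished_sending`/`finished_recving` are assumptions; only `pending_handshake` is named in the summary), not the actual vLLM definition:

```python
from dataclasses import dataclass


@dataclass
class KVTransferResult:
    """Sketch: aggregate status a worker returns from get_finished()."""
    finished_sending: set
    finished_recving: set
    pending_handshake: set

    def to_tuple(self) -> tuple:
        # Fixed field order so every rank encodes and decodes identically.
        return (self.finished_sending, self.finished_recving,
                self.pending_handshake)

    @classmethod
    def from_tuple(cls, data: tuple) -> "KVTransferResult":
        return cls(*data)
```

Rank 0 would aggregate `from_tuple(rank_data)` results from every other rank before reporting, so any field added later must be appended to both `to_tuple` and `from_tuple` in lockstep.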
This pull request has merge conflicts that must be resolved before it can be merged.
close in favor of #19836
Essential Elements of an Effective PR Description Checklist

- Documentation update, e.g. `supported_models.md` and `examples` for a new model.

Purpose
Split from #19447, this PR keeps using zmq for nixl metadata transfer, but uses `ThreadPoolExecutor` to do `_nixl_handshake` in the background.

closes #19777
Test Plan
Unit test WIP
Test Result
(Optional) Documentation Update