[Feature]: Support offline expert load distribution recording #19658


Draft · wants to merge 11 commits into base: main

Conversation

@jianzs (Contributor) commented Jun 15, 2025

As performance requirements become increasingly demanding, we need to optimize every aspect of our system. For MoE models, we can utilize load balancing strategies like EPLB to distribute workloads evenly (ref: #18343). However, this requires easy access to expert load distribution data, which is the primary goal of this PR.

This PR implements APIs for recording expert workload, allowing us to analyze and improve model performance. The API is compatible with SGLang's, so it integrates easily into existing inference systems when users run multiple engines simultaneously.

TODO:

  • Modify worker and fused_moe to support expert load distribution recording
  • Add test cases
  • Evaluate performance impact
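
To make the intended API concrete, here is a minimal, self-contained sketch of what such a recorder could look like. It is an illustration only: the class name `ExpertDistributionRecorder`, the `on_tokens_routed` hook, and the JSON layout are assumptions rather than this PR's actual implementation; only the start/stop/dump method names mirror the API described here.

```python
import json
from collections import Counter
from pathlib import Path


class ExpertDistributionRecorder:
    """Hypothetical sketch: accumulate per-(layer, expert) token counts
    between start/stop, then dump them as JSON for offline analysis."""

    def __init__(self, output_dir: str) -> None:
        self._output_dir = Path(output_dir)
        self._recording = False
        self._counts: Counter[tuple[int, int]] = Counter()

    def expert_distribution_record(self, is_start: bool) -> None:
        # Mirrors the start/stop toggle of the engine-level API.
        self._recording = is_start

    def on_tokens_routed(self, layer: int, expert_ids: list[int]) -> None:
        # Would be called from the MoE routing code; a no-op unless
        # recording is active, to keep the steady-state cost negligible.
        if self._recording:
            for expert_id in expert_ids:
                self._counts[(layer, expert_id)] += 1

    def dump_expert_distribution_record(self) -> Path:
        # Write the accumulated counts to a JSON file for offline analysis.
        self._output_dir.mkdir(parents=True, exist_ok=True)
        path = self._output_dir / "expert_distribution.json"
        payload = [
            {"layer": layer, "expert": expert, "count": count}
            for (layer, expert), count in sorted(self._counts.items())
        ]
        path.write_text(json.dumps(payload, indent=2))
        return path
```

Data dumped in this shape could then feed a rebalancing pass such as EPLB (#18343), which needs per-expert load counts as input.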


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@jianzs jianzs marked this pull request as draft June 15, 2025 07:19
@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @jianzs, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request establishes the infrastructure needed to support offline recording of expert load distribution in Mixture-of-Experts (MoE) models. It defines the API interface, adds control endpoints to the API server, introduces a configuration environment variable, and wires the calls through the core engine components. Note that the actual recording and dumping logic within the worker processes, as well as tests and performance evaluation, remain pending per the PR description.

Highlights

  • New Feature: Expert Distribution Recording Infrastructure: Introduces the foundational APIs and plumbing across the engine layers to enable recording and dumping expert load distribution data for MoE models.
  • API Endpoints for Control: Adds new GET endpoints (/start_expert_distribution_record, /stop_expert_distribution_record, /dump_expert_distribution_record) to the OpenAI API server, allowing external control over the recording process. These endpoints are enabled when the VLLM_EXPERT_DISTRIBUTION_RECORDER_DIR environment variable is set.
  • Configuration via Environment Variable: Adds the VLLM_EXPERT_DISTRIBUTION_RECORDER_DIR environment variable to control whether the recording feature is active and to specify the directory where dumped data should be stored.
  • Engine Protocol and Implementation Stubs: Adds abstract methods to the EngineProtocol and implements/delegates these methods through AsyncLLMEngine, LLMEngine, EngineCoreClient, and Executor to wire the functionality down to the executor level. The actual recording logic within workers/executors is not included in this patch.
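
As an illustration of the gating described above, the sketch below registers the three endpoints only when the environment variable is set. The `register_expert_distribution_routes` helper and the plain-dict router are hypothetical stand-ins for the real route wiring in vLLM's OpenAI API server; the endpoint paths and the environment variable name are the ones this PR introduces.

```python
import os


def register_expert_distribution_routes(router: dict) -> bool:
    """Hypothetical sketch: expose the recorder endpoints only when
    VLLM_EXPERT_DISTRIBUTION_RECORDER_DIR is set, mirroring the gating
    described in the PR. Returns True if the routes were registered."""
    recorder_dir = os.environ.get("VLLM_EXPERT_DISTRIBUTION_RECORDER_DIR")
    if not recorder_dir:
        return False

    def start() -> dict:
        # Would call engine.expert_distribution_record(is_start=True).
        return {"status": "recording started"}

    def stop() -> dict:
        # Would call engine.expert_distribution_record(is_start=False).
        return {"status": "recording stopped"}

    def dump() -> dict:
        # Would ask the engine to write recorded data under recorder_dir.
        return {"status": "dumped", "dir": recorder_dir}

    router["/start_expert_distribution_record"] = start
    router["/stop_expert_distribution_record"] = stop
    router["/dump_expert_distribution_record"] = dump
    return True
```

Gating registration on the environment variable keeps the endpoints entirely invisible when the feature is off, rather than returning errors at request time.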
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for your GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


mergify bot commented Jun 15, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @jianzs.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jun 15, 2025
@gemini-code-assist (bot) left a comment


Code Review

This PR introduces expert load distribution recording. Consider standardizing API endpoint methods and response formats, ensuring type hint completeness, and addressing the TODOs for testing, performance evaluation, and worker modifications.

Comment on lines 88 to 113
def expert_distribution_record(self, is_start: bool) -> None:
raise NotImplementedError

def dump_expert_distribution_record(self) -> None:
raise NotImplementedError

medium

expert_distribution_record and dump_expert_distribution_record are not implemented in SyncMPClient. Implement these methods in SyncMPClient or document the limitation.
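
One way to address this review comment would be to forward both calls through the client's existing request path to the engine core process. The sketch below is hypothetical: `SyncMPClientSketch` and the injected `call_utility` callable stand in for whatever RPC mechanism the real SyncMPClient uses; it only illustrates the delegation shape.

```python
from typing import Any, Callable


class SyncMPClientSketch:
    """Hypothetical sketch of how SyncMPClient could forward the two
    recorder calls to the engine core process. `call_utility` stands in
    for the client's real blocking request mechanism."""

    def __init__(self, call_utility: Callable[..., Any]) -> None:
        # call_utility(method_name, *args) sends a request to the core
        # process and blocks until the core has handled it.
        self._call_utility = call_utility

    def expert_distribution_record(self, is_start: bool) -> None:
        # Start or stop recording on the engine core.
        self._call_utility("expert_distribution_record", is_start)

    def dump_expert_distribution_record(self) -> None:
        # Ask the engine core to write the recorded data to disk.
        self._call_utility("dump_expert_distribution_record")
```

Because the methods carry no return value, a thin pass-through like this keeps the sync client consistent with the async path while leaving the real work in the core process.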

@jianzs jianzs force-pushed the feat/exp-load-record branch from 612881c to 80299e0 Compare June 15, 2025 07:21
@mergify mergify bot removed the needs-rebase label Jun 15, 2025
@jianzs (Contributor, Author) commented Jun 15, 2025

/gemini review

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces the API layer for recording expert load distribution in MoE models. The changes span across various engine components, including async and sync engines, multiprocessing utilities, and API server endpoints. The overall structure is consistent with existing patterns for adding new engine functionalities.

The main area for improvement is the consistent use of return type hints (-> None) for methods that do not return a value, as per PEP 484. This will enhance code clarity and maintainability.

The PR description includes a TODO list for implementing the actual recording logic, adding test cases, and evaluating performance. These are important next steps for this feature.

jianzs and others added 4 commits June 15, 2025 17:03
Signed-off-by: Jade Zheng <[email protected]>

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
jianzs and others added 2 commits June 15, 2025 17:03
Signed-off-by: Jade Zheng <[email protected]>

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

mergify bot commented Jun 19, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @jianzs.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jun 19, 2025