
Conversation

@PeterSH6
Collaborator

For each trajectory, if we use the same request_ids, it would be easier for us to track its progress in the vllm/sglang/trtllm backend.

Also, it would be easier for us to abort one trajectory when needed.
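The idea above can be sketched as follows. This is a hypothetical illustration, not the verl implementation: the class `TrajectoryManager` and the `backend_abort` callback are invented names, showing how one stable `request_id` per trajectory makes tracking and aborting straightforward.

```python
import uuid


class TrajectoryManager:
    """Illustrative sketch: one request_id per trajectory, reused across turns."""

    def __init__(self):
        self.active = {}  # request_id -> trajectory metadata

    def start(self, prompt: str) -> str:
        # One id for the whole trajectory, reused for every turn's request.
        request_id = uuid.uuid4().hex
        self.active[request_id] = {"prompt": prompt, "turns": 0}
        return request_id

    def record_turn(self, request_id: str) -> None:
        self.active[request_id]["turns"] += 1

    def abort(self, request_id: str, backend_abort) -> None:
        # A single stable id lets us cancel the whole trajectory in one call.
        if request_id in self.active:
            backend_abort(request_id)
            del self.active[request_id]
```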

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review; otherwise, the reviewer might deprioritize this PR.

Contributor

@gemini-code-assist (bot) left a comment


Code Review

The pull request modifies the generate method in AsyncLLMServerManager to use the same request_id for each rollout turn, which is intended to improve tracking and aborting trajectories in the backend. The change involves replacing uuid4().hex with the input request_id when calling server.generate.remote. This change seems reasonable and aligns with the stated goal of better tracking. However, it's crucial to ensure that the request_id is properly managed and unique across different trajectories to avoid unintended side effects.

  server = self._choose_server(request_id)
  output = await server.generate.remote(
-     request_id=uuid4().hex,  # use new request_id for each turn
+     request_id=request_id,  # use the same request_id for better tracking
Contributor


critical

Using the same request_id across multiple turns could lead to issues if the backend relies on unique request_ids for identifying individual requests. This might cause incorrect caching or interference between turns. It's important to ensure that the backend system can handle the same request_id for related turns within a trajectory without any conflicts or errors.

Consider adding a mechanism to ensure uniqueness across different trajectories, perhaps by incorporating a trajectory ID or session ID into the request_id.

Suggested change
-     request_id=request_id,  # use the same request_id for better tracking
+     request_id=f"{request_id}_turn",  # use the same request_id for better tracking
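The reviewer's suggestion of incorporating a turn or trajectory marker can be sketched as below. The helper name `make_turn_id` is illustrative (a fixed `_turn` suffix, as literally suggested, would still collide across turns, so this sketch includes the turn index):

```python
def make_turn_id(request_id: str, turn: int) -> str:
    # Prefix keeps the trajectory traceable; turn index keeps each
    # per-turn request id unique within the trajectory.
    return f"{request_id}_turn_{turn}"
```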

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request correctly fixes a bug in AsyncLLMServerManager where a new, random request_id was generated for each generation turn, ignoring the request_id associated with the trajectory. By propagating the correct request_id to the backend server, this change enables consistent tracking of multi-turn rollouts and ensures sticky sessions function as intended for prefix caching. The fix is sound and aligns with the stated goals of improving traceability and control over rollouts.
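The sticky-session behavior mentioned in this review can be illustrated with a minimal sketch. This is an assumption about how `_choose_server` might work, not verl's actual routing logic: hashing a stable `request_id` always selects the same server, so successive turns of a trajectory hit the same KV/prefix cache.

```python
import hashlib


def choose_server(request_id: str, servers: list):
    # Deterministic hash of the id: the same trajectory always maps to the
    # same server, which is what makes prefix caching effective across turns.
    digest = hashlib.sha256(request_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```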

@wuxibin89
Collaborator

There's an issue report that reusing the same request_id may lead to conflicts.

  server = self._choose_server(request_id)
  output = await server.generate.remote(
-     request_id=uuid4().hex,  # use new request_id for each turn
+     request_id=request_id,  # use the same request_id for better tracking
Collaborator


We'd better use a new request_id for each turn.
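One way to reconcile both viewpoints (a fresh id per turn to avoid backend conflicts, while keeping the trajectory traceable) is sketched below. The helper name `new_turn_request_id` is hypothetical, not part of the merged change:

```python
from uuid import uuid4


def new_turn_request_id(trajectory_id: str) -> str:
    # Fresh random suffix per turn avoids backend id collisions;
    # the trajectory_id prefix preserves end-to-end traceability.
    return f"{trajectory_id}-{uuid4().hex}"
```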
