@Begunner (Collaborator) commented Feb 9, 2026

What does this PR do?

Add a monkey patch that forces release of the memory held by the input tensors and their gradients in Megatron's CheckpointFunction. A self-contained sketch of the technique follows the observations below.

Unreleased GPU memory is observed during RL or SFT training of MoE models (e.g., Qwen3-30B-A3B/Qwen3-VL-30B-A3B) with the image verlai/verl:vllm012.dev3 and checkpointing enabled (recompute_method=uniform, recompute_granularity=full, recompute_num_layers=1). Key observations:

  • No residual memory with dense models (e.g., Qwen3-VL-2B/Qwen3-8B) or with checkpointing disabled.
  • Residual memory size depends on recompute_num_layers: it is roughly halved when the value is raised from 1 to 2.
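
To make the mechanism concrete, here is a hedged, self-contained sketch of the technique on a toy checkpoint function. It is illustrative only and not the code added by this PR: names such as TinyCheckpointFunction are invented for the example, and Megatron's real CheckpointFunction additionally restores RNG state and handles distributed saved activations.

```python
# Illustrative sketch only, NOT the code added by this PR: a toy checkpoint
# Function demonstrating the idea of explicitly dropping the recomputed
# inputs and their .grad fields at the end of backward, so the CUDA caching
# allocator can reuse that memory immediately.
import torch


class TinyCheckpointFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, run_function, *args):
        ctx.run_function = run_function
        ctx.save_for_backward(*args)
        with torch.no_grad():
            return run_function(*args)

    @staticmethod
    def backward(ctx, *grad_outputs):
        inputs = ctx.saved_tensors
        # Recompute the forward pass on detached copies of the inputs.
        detached = tuple(t.detach().requires_grad_(t.requires_grad) for t in inputs)
        with torch.enable_grad():
            outputs = ctx.run_function(*detached)
        if isinstance(outputs, torch.Tensor):
            outputs = (outputs,)
        torch.autograd.backward(outputs, grad_outputs)

        grads = tuple(t.grad for t in detached)

        # The point of the patch: break lingering references to the detached
        # inputs and their gradients before returning, instead of waiting for
        # Python/autograd to drop them later.
        for t in detached:
            t.grad = None
        del detached, outputs, inputs

        return (None,) + grads


if __name__ == "__main__":
    x = torch.randn(4, 4, requires_grad=True)
    y = TinyCheckpointFunction.apply(lambda t: (t * t).sum(), x)
    y.backward()
    print(x.grad.shape)  # torch.Size([4, 4])
```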

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is one of feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this
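
A minimal usage sketch, based on the import shown in the review comment below. In this PR the call is made automatically when the Megatron worker is set up, so users normally do not need to invoke it themselves.

```python
# Minimal usage sketch; import path taken from the diff under review below.
from verl.models.mcore.patch import apply_patch_checkpoint

apply_patch_checkpoint()  # monkey-patches Megatron's CheckpointFunction
```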

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces a monkey patch to address a GPU memory leak in Megatron's checkpointing mechanism when used with Mixture-of-Experts (MoE) models. The patch manually releases memory for input tensors and their gradients within the CheckpointFunction.backward method. The approach is sound for fixing the leak. However, the patch is applied unconditionally, which could pose a risk to non-MoE models. I've suggested making its application conditional based on whether an MoE model is in use.

Comment on lines 101 to 104
# Apply checkpoint patch for MoE models
from verl.models.mcore.patch import apply_patch_checkpoint

apply_patch_checkpoint()

Severity: high

The comment "Apply checkpoint patch for MoE models" indicates this patch is specific to Mixture-of-Experts models. However, it's being applied unconditionally. This could introduce risks or unintended side effects for non-MoE models that also use checkpointing. It would be safer to apply this patch conditionally, only when an MoE model is detected. You can check for this using self.engine_config.expert_model_parallel_size > 1.

Suggested change

    - # Apply checkpoint patch for MoE models
    - from verl.models.mcore.patch import apply_patch_checkpoint
    - apply_patch_checkpoint()
    + # Apply checkpoint patch for MoE models
    + if self.engine_config.expert_model_parallel_size > 1:
    +     from verl.models.mcore.patch import apply_patch_checkpoint
    +     apply_patch_checkpoint()

@ISEEKYAN (Collaborator) commented:

megatron PR link: NVIDIA/Megatron-LM#3267

ISEEKYAN previously approved these changes Feb 10, 2026