[https://nvbugs/5919026][fix] Pass sparse_attn_config from effective_draft_config for one-model draft KV cache #12032
Conversation
/bot run --disable-fail-fast --stage-list "DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-1,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-2,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-3,DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-4,GB200-4_GPUs-PyTorch-PerfSanity-Post-Merge-1,GB200-4_GPUs-PyTorch-PerfSanity-Post-Merge-2,GB200-4_GPUs-PyTorch-PerfSanity-Post-Merge-3,GB200-4_GPUs-PyTorch-PerfSanity-Post-Merge-4,GB200-4_GPUs-PyTorch-PerfSanity-Post-Merge-5,GB200-4_GPUs-PyTorch-PerfSanity-Post-Merge-6,GB200-4_GPUs-PyTorch-PerfSanity-Post-Merge-7"
📝 Walkthrough

Removes a defensive guard in sparse attention indexer preparation that previously skipped setup when the kv_cache_manager lacked index_head_dim. Draft KV-cache creation now derives the sparse attention config, enabling proper handling in multi-token prediction (MTP) scenarios. Two previously skipped performance tests are re-enabled.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
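The guard removal described in the walkthrough can be sketched as follows. This is a hedged illustration only; all names (`DSACacheManager`, `prepare_indexer`) are stand-ins and do not reflect the actual TensorRT-LLM API. The point is the failure mode: with the old guard, a cache manager missing `index_head_dim` silently skipped indexer setup, whereas without it, a misconfigured manager fails loudly.

```python
# Hypothetical sketch of the removed defensive guard; names are
# illustrative, not the real TensorRT-LLM implementation.
class DSACacheManager:
    """Stand-in for a sparse-attention cache manager."""
    def __init__(self, index_head_dim):
        self.index_head_dim = index_head_dim

def prepare_indexer(kv_cache_manager):
    # Old defensive guard (removed by this PR): silently skip setup
    # when index_head_dim is absent, disabling sparse attention.
    #
    #   if getattr(kv_cache_manager, "index_head_dim", None) is None:
    #       return None
    #
    # Without the guard, a manager created without the sparse config
    # raises AttributeError here instead of failing silently.
    return {"index_head_dim": kv_cache_manager.index_head_dim}

meta = prepare_indexer(DSACacheManager(index_head_dim=64))
print(meta)  # {'index_head_dim': 64}
```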
PR_Github #38252 [ run ] triggered by Bot. Commit:
PR_Github #38252 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #38345 [ run ] triggered by Bot. Commit:
…draft_config for one-model draft KV cache

In _create_one_model_draft_kv_cache_manager, the sparse_attn_config was hardcoded to None. However, for MTP with models using sparse attention (e.g., DeepSeek V3 with DSA), the draft layers share the same architecture as the target model and need the sparse_attention_config.

The fix gets sparse_attn_config from effective_draft_config, which falls back to the target model's config for MTP mode. This ensures DSACacheManager is properly initialized with the required index_head_dim and other parameters.

Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
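The fallback behavior in the commit message can be sketched as below. This is a minimal illustration under assumed types; `ModelConfig`, `effective_draft_config`, and `create_one_model_draft_kv_cache_manager` here are simplified stand-ins, not the actual TensorRT-LLM signatures. It shows why hardcoding `sparse_attn_config=None` broke MTP: the draft side has no config of its own, so the effective config must fall back to the target model's.

```python
# Hedged sketch of the config-fallback fix; all names and shapes are
# illustrative assumptions, not the real TensorRT-LLM API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SparseAttentionConfig:
    index_head_dim: int  # required to initialize a DSA cache manager

@dataclass
class ModelConfig:
    sparse_attention_config: Optional[SparseAttentionConfig] = None

def effective_draft_config(draft_config: Optional[ModelConfig],
                           target_config: ModelConfig) -> ModelConfig:
    # For MTP, draft layers share the target architecture, so fall
    # back to the target model's config when no draft config exists.
    return draft_config if draft_config is not None else target_config

def create_one_model_draft_kv_cache_manager(draft_config: Optional[ModelConfig],
                                            target_config: ModelConfig) -> dict:
    cfg = effective_draft_config(draft_config, target_config)
    # Before the fix this was hardcoded: sparse_attn_config = None,
    # so the DSA cache manager never saw index_head_dim for MTP.
    sparse_attn_config = cfg.sparse_attention_config
    return {"sparse_attn_config": sparse_attn_config}

target = ModelConfig(sparse_attention_config=SparseAttentionConfig(index_head_dim=128))
mgr = create_one_model_draft_kv_cache_manager(None, target)
print(mgr["sparse_attn_config"].index_head_dim)  # 128
```

With a draft config of `None` (the MTP case), the draft KV-cache manager now receives the target model's sparse attention config instead of `None`.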
0dc5178 to a2ec3b9
/bot run --disable-fail-fast
PR_Github #38345 [ run ] completed with state
PR_Github #38410 [ run ] triggered by Bot. Commit:
Summary by CodeRabbit
- Bug Fixes: draft KV-cache creation now derives the sparse attention config needed for MTP with sparse-attention models.
- Tests: two previously skipped performance tests are re-enabled.
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
- [ ] PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- [ ] PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- [ ] Test cases are provided for new code paths (see test instructions).
- [ ] Any new dependencies have been scanned for license and vulnerabilities.
- [ ] CODEOWNERS updated if ownership changes.
- [ ] Documentation updated as needed.
- [ ] Tava architecture diagram updated if there is a significant design change in the PR.
- [ ] The reviewers assigned automatically/manually are appropriate for the PR.

Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help

To see a list of available CI bot commands, please comment `/bot help`.