
Conversation

shanjiaz (Collaborator)

SUMMARY:
Added qwen3 next fp8 quantization example. Model produced is uploaded here: https://huggingface.co/shanjiaz/qwen3-80b-fp8-dynamic

TEST PLAN:
Tested locally.

Signed-off-by: shanjiaz <zsjwpianpian@gmail.com>
gemini-code-assist bot (Contributor)

Summary of Changes

Hello @shanjiaz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new example showing how to apply FP8 dynamic quantization to the Qwen3-Next-80B-A3B-Instruct model using the llmcompressor framework. The goal is to demonstrate how to reduce the model's memory footprint and improve inference efficiency while maintaining reasonable performance, by selectively quantizing key components and excluding others.

Highlights

  • New Example Added: A new example script, qwen3_next_example.py, demonstrates FP8 dynamic quantization for the Qwen3-Next-80B-A3B-Instruct model.
  • FP8 Dynamic Quantization: The example uses the llmcompressor library to apply FP8 dynamic quantization to the model's Linear layers (see the sketch after this list).
  • Layer Exclusion: Specific layers such as lm_head, the MLP gate layers, and attention-related linear layers are explicitly ignored during quantization.
  • Model Verification and Saving: The script runs a sample generation to confirm the quantized model still functions, then saves the compressed model and tokenizer to disk.
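
To make the highlights concrete, here is a minimal sketch of what such a script looks like. The oneshot/QuantizationModifier calls follow llmcompressor's standard FP8-dynamic example pattern; the ignore regexes are illustrative assumptions standing in for the exclusions described above, so consult the merged qwen3_next_example.py for the exact strings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-Next-80B-A3B-Instruct"

# Load the model and tokenizer in their original precision.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8 dynamic quantization of Linear layers. The ignore patterns below are
# illustrative stand-ins for the exclusions listed above (lm_head, MLP gate
# layers, attention-related linear layers).
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=[
        "lm_head",
        "re:.*mlp.gate$",    # MoE router gates
        "re:.*self_attn.*",  # attention-related linear layers
    ],
)

# FP8_DYNAMIC is data-free, so oneshot runs without a calibration dataset.
oneshot(model=model, recipe=recipe)

# Sample generation to confirm the quantized model still works.
inputs = tokenizer("Hello my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))

# Save the compressed model and tokenizer to disk.
SAVE_DIR = MODEL_ID.split("/")[-1] + "-FP8-dynamic"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```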


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request adds a new example script for FP8 quantization of the Qwen3-Next model. The script is well-structured and follows existing patterns. My main feedback is to improve the clarity of the quantization recipe by adding comments to explain why a large number of modules are being ignored, as this is not standard practice and could be confusing for users referencing this example.

Signed-off-by: shanjiaz <zsjwpianpian@gmail.com>
Signed-off-by: shanjiaz <zsjwpianpian@gmail.com>
dsikka (Collaborator) left a comment


One small comment, otherwise LGTM

@dsikka added the ready label on Oct 1, 2025
@dsikka changed the title from "add qwen3 next example" to "[Qwen3Nex] Add FP8 Quantization Example" on Oct 1, 2025
@dsikka changed the title from "[Qwen3Nex] Add FP8 Quantization Example" to "[Qwen3Next] Add FP8 Quantization Example" on Oct 1, 2025
Signed-off-by: shanjiaz <zsjwpianpian@gmail.com>
rahul-tuli (Collaborator) previously approved these changes on Oct 1, 2025 and left a comment


LGTM!

Signed-off-by: shanjiaz <zsjwpianpian@gmail.com>
@dsikka enabled auto-merge (squash) on October 1, 2025 13:30
@dsikka merged commit 815a4ff into main on Oct 1, 2025
8 checks passed
@dsikka deleted the hz-add-qwen3-next-example branch on October 1, 2025 13:31
dsikka added a commit that referenced this pull request Oct 1, 2025
SUMMARY:
- Need to update links when the following PRs land:

1. #1886
2. #1874
3. #1889
ng-blip commented on Oct 1, 2025

Hello!

  1. It looks like the model.safetensors.index.json file here (https://huggingface.co/shanjiaz/qwen3-80b-fp8-dynamic) no longer contains any 'mtp.*' layers (but the original model does).
     Does llmcompressor 0.8.0 support exporting MTP layers (multi-token-prediction speculative decoding) now? Do you have any plans for future support?
  2. The recipe of the model ignores 're:.*self_attn.*', but qwen3_next_example.py doesn't show this layer exclusion. What is the best approach?

Thank you.

cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request Oct 2, 2025
SUMMARY:
Added qwen3 next fp8 quantization example. Model produced is uploaded
[here](https://huggingface.co/shanjiaz/qwen3-80b-fp8-dynamic)

TEST PLAN:
Tested locally.

---------

Signed-off-by: shanjiaz <zsjwpianpian@gmail.com>
Signed-off-by: Cassie Jeon <cajeon@redhat.com>
cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request Oct 2, 2025
SUMMARY:
- Need to update links when the following PRs land:

1. vllm-project#1886
2. vllm-project#1874
3. vllm-project#1889

Signed-off-by: Cassie Jeon <cajeon@redhat.com>
shanjiaz (Collaborator, Author) commented on Oct 3, 2025


@ng-blip Thanks for reaching out!

  1. It seems the MTP layers are ignored for the Qwen3 Next model in transformers. Check here.
  2. Feel free to quantize the self_attn layers! The model you mentioned there was just for testing. Please refer to the examples (a sketch follows below). : )
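
For readers with the same question, a hedged sketch of a recipe variant that also quantizes the attention projections: the self_attn pattern is simply left out of the ignore list (the regexes are again illustrative, not the exact strings from the uploaded model's recipe).

```python
from llmcompressor.modifiers.quantization import QuantizationModifier

# Keep lm_head and the MoE router gates in high precision, but quantize
# the attention projections: no self_attn pattern in the ignore list.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:.*mlp.gate$"],
)
```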

@dsikka added the qwen and fp8 labels on Oct 14, 2025