
Conversation

@Cui-yshoho
Contributor

What does this PR do?

Fixes # (issue)

Adds # (feature)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline?
  • Did you make sure to update the documentation with your changes? E.g. record bug fixes or new features in What's New. Here are the documentation guidelines.
  • Did you build and run the code without any errors?
  • Did you report the running environment (NPU type/MS version) and performance in the doc? (It is best to record this for data loading, model inference, and training tasks.)
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@xxx

@Cui-yshoho Cui-yshoho requested a review from vigo999 as a code owner December 4, 2025 02:01
@gemini-code-assist
Contributor

Summary of Changes

Hello @Cui-yshoho, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new multimodal processor for Qwen2-VL models, significantly refactors the Wan-VACE transformer architecture, and enhances several pipelines with improved type hinting and documentation. It also addresses cross-version compatibility for various pipeline tests, ensuring greater stability and robustness across different library environments. The changes aim to fix bugs and improve the overall functionality and maintainability of the mindone/diffusers components.

Highlights

  • Qwen2-VL Multimodal Processor: A new Qwen2VLProcessor has been introduced in mindone.transformers to streamline the handling of multimodal inputs (images, videos, and text) for Qwen2-VL models, improving their integration and processing logic (a hedged usage sketch follows this list).
  • Wan-VACE Transformer Enhancements: The WanVACETransformer3DModel has undergone significant refactoring, including the adoption of WanAttention and CacheMixin, along with adjustments to attention layer parameters and optimizations for tensor operations. A new runtime check for LoRA scaling has also been added.
  • Pipeline Documentation and Type Consistency: Several pipelines received type-hint updates for improved compatibility (e.g., list[Type] | None to Union[list[Type], None], ms.dtype to ms.Type). The WanVACEPipeline now includes a comprehensive usage example in its docstring.
  • Cross-Version Test Compatibility: Conditional skip logic has been added to numerous pipeline tests so they adapt to different versions of the diffusers and transformers libraries and do not fail on API changes (see the version-gating sketch after this list).
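
As a rough illustration of the processor highlight, here is a minimal usage sketch that mirrors the Hugging Face transformers Qwen2-VL processor API, which mindone.transformers follows; the checkpoint name, the placeholder image, and the return_tensors value are assumptions, not taken from this PR.

```python
# Hedged sketch: typical multimodal preprocessing with Qwen2VLProcessor.
# The checkpoint name, placeholder image, and return_tensors value are
# illustrative assumptions, not code from this PR.
from PIL import Image

from mindone.transformers import Qwen2VLProcessor

processor = Qwen2VLProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

image = Image.new("RGB", (448, 448))  # stand-in for a real input image
text = "<|vision_start|><|image_pad|><|vision_end|>Describe this image."

# One call tokenizes the text and prepares the pixel values / grid metadata
# that the Qwen2-VL vision encoder expects.
inputs = processor(text=[text], images=[image], padding=True, return_tensors="np")
print(sorted(inputs.keys()))  # e.g. image_grid_thw, input_ids, pixel_values, ...
```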
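
For the cross-version test compatibility point, a minimal sketch of version-gated skipping is shown below; the 0.35.0 threshold and the test body are illustrative assumptions rather than the PR's actual gating.

```python
# Hedged sketch: skip a test when the installed diffusers predates an API change.
# The 0.35.0 threshold and the test body are illustrative assumptions.
import importlib.metadata

import pytest
from packaging import version

DIFFUSERS_VERSION = version.parse(importlib.metadata.version("diffusers"))


@pytest.mark.skipif(
    DIFFUSERS_VERSION < version.parse("0.35.0"),
    reason="relies on WanVACE behavior that only exists in diffusers >= 0.35.0",
)
def test_wan_vace_pipeline_cross_version():
    # version-dependent assertions would go here
    ...
```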
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a variety of fixes and refactorings across the diffusers library. Key improvements include updating transformer_wan_vace.py to use a new WanAttention class, correcting type hints for better compatibility (e.g., ms.dtype to ms.Type), and adding support for Qwen2VLProcessor. The changes generally enhance code quality and correctness. My review identifies one high-severity issue in pipeline_wan_vace.py where a necessary dtype conversion was removed, which could lead to runtime errors; I have provided a suggestion to fix it.


  conditioning_latents = self.prepare_video_latents(video, mask, reference_images, generator)
- mask = self.prepare_masks(mask, reference_images, generator).to(conditioning_latents.dtype)
+ mask = self.prepare_masks(mask, reference_images, generator)

Severity: high

The removal of .to(conditioning_latents.dtype) could introduce a dtype mismatch during the mint.cat operation on the next line. The conditioning_latents tensor has vae_dtype, while the mask tensor from prepare_masks appears to have a float32 dtype originating from the preprocess_conditions call. If vae_dtype is not float32 (e.g., bfloat16), this will likely cause a runtime error. It's safer to ensure both tensors have the same dtype before concatenation.

Suggested change
- mask = self.prepare_masks(mask, reference_images, generator)
+ mask = self.prepare_masks(mask, reference_images, generator).to(conditioning_latents.dtype)
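
To make the failure mode concrete, here is a self-contained sketch (shapes and dtypes are illustrative assumptions, not the pipeline's real values) showing why the cast matters before mint.cat:

```python
# Hedged sketch: mint.cat needs matching dtypes, so the mask is cast to the
# latents' dtype before concatenation. Shapes and dtypes here are assumptions.
import mindspore as ms
from mindspore import mint, ops

conditioning_latents = ops.zeros((1, 16, 4, 8, 8), dtype=ms.float16)  # e.g. vae_dtype
mask = ops.zeros((1, 4, 4, 8, 8), dtype=ms.float32)                   # float32 from preprocessing

mask = mask.to(conditioning_latents.dtype)  # align dtypes before concatenating
latents = mint.cat([conditioning_latents, mask], dim=1)
print(latents.shape, latents.dtype)  # (1, 20, 4, 8, 8) Float16
```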

  self,
  prompt: Union[str, List[str]] = None,
- dtype: Optional[ms.dtype] = None,
+ dtype: Optional[ms.Type] = None,

Severity: medium

The type hint for dtype has been corrected from Optional[ms.dtype] to Optional[ms.Type]. This is a good fix: ms.Type is the correct annotation for a MindSpore dtype argument, since dtype objects such as ms.float32 are instances of ms.Type, whereas ms.dtype is the dtype namespace rather than a class. This improves type correctness and clarity.
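
For reference, a hedged sketch of what the corrected annotation looks like in a signature; the function name and defaults are illustrative assumptions, not the pipeline's actual code:

```python
# Hedged sketch: annotating a dtype parameter with ms.Type.
# The function name and default dtype are illustrative assumptions.
from typing import List, Optional, Union

import mindspore as ms
from mindspore import ops


def encode_prompt(prompt: Union[str, List[str]], dtype: Optional[ms.Type] = None):
    dtype = dtype or ms.float32  # dtype objects such as ms.float32 are instances of ms.Type
    # placeholder embedding; a real implementation would run the text encoder
    return ops.zeros((1, 4), dtype=dtype)
```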

  prompt: Union[str, List[str]] = None,
  image: Optional[ms.tensor] = None,
- dtype: Optional[ms.dtype] = None,
+ dtype: Optional[ms.Type] = None,

Severity: medium

Same fix as above: Optional[ms.dtype] is replaced with Optional[ms.Type].

  prompt: Union[str, List[str]] = None,
  image: Optional[ms.tensor] = None,
- dtype: Optional[ms.dtype] = None,
+ dtype: Optional[ms.Type] = None,

Severity: medium

Same fix as above: Optional[ms.dtype] is replaced with Optional[ms.Type].

  self,
  prompt: Union[str, List[str]] = None,
- dtype: Optional[ms.dtype] = None,
+ dtype: Optional[ms.Type] = None,

Severity: medium

Same fix as above: Optional[ms.dtype] is replaced with Optional[ms.Type].

  self,
  prompt: Union[str, List[str]] = None,
- dtype: Optional[ms.dtype] = None,
+ dtype: Optional[ms.Type] = None,

Severity: medium

Same fix as above: Optional[ms.dtype] is replaced with Optional[ms.Type].

@Cui-yshoho Cui-yshoho force-pushed the fixbug_fast branch 2 times, most recently from 4d6a0f4 to 06a0fd3 on December 4, 2025 02:18
@Cui-yshoho Cui-yshoho added the bug (Something isn't working) label on Dec 4, 2025
@vigo999 vigo999 added this to mindone Dec 10, 2025
@vigo999 vigo999 moved this to In Progress in mindone Dec 10, 2025
@vigo999 vigo999 added this to the v0.5.0 milestone Dec 10, 2025