Conversation

@Cui-yshoho Cui-yshoho commented Dec 3, 2025

What does this PR do?

Description

This PR introduces support for running the Flux2Pipeline inference under a distributed setup in MindSpore.

Core Implementation & Rationale

Due to the large size of the FLUX.2 model weights, multi-card execution is mandatory, as the model cannot fit onto a single card.

The implementation achieves memory efficiency and parallelism by:
1. Initializing the process group and setting DATA_PARALLEL mode.
2. Using the prepare_network utility from mindone.trainers.zero with zero_stage=3 to apply ZeRO-3 sharding to the memory-heavy transformer and text_encoder modules (matching the sample code below).
3. The provided script is a minimal working example of distributed inference.

Sample Code Included

from functools import partial

import mindspore as ms
from mindspore import mint
from mindspore.communication.management import GlobalComm

from mindone.diffusers import Flux2Pipeline
from mindone.trainers.zero import prepare_network

# Initialize the communication group and run in data-parallel mode.
mint.distributed.init_process_group()
ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.DATA_PARALLEL)
local_rank = mint.distributed.get_rank()

# Load the pipeline in bfloat16, then shard the memory-heavy modules with ZeRO-3
# across the whole world group.
pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-dev", mindspore_dtype=ms.bfloat16)
shard_fn = partial(prepare_network, zero_stage=3, optimizer_parallel_group=GlobalComm.WORLD_COMM_GROUP)
pipe.transformer = shard_fn(pipe.transformer)
pipe.text_encoder = shard_fn(pipe.text_encoder)

# Optional: JIT-compile the transformer forward pass.
# pipe.transformer.construct = ms.jit(pipe.transformer.construct)

prompt = "A cat holding a sign that says hello world"
image = pipe(prompt=prompt, num_inference_steps=50, guidance_scale=2.5).images[0]

# Only rank 0 writes the generated image to disk.
if local_rank == 0:
    image.save("flux.png")

Usage

To execute the provided script (e.g., saved as net.py), use the msrun launch utility. This example starts the script on two workers/cards:

msrun --worker_num=2 --local_worker_num=2 --master_port=8118 --log_dir=msrun_log --join=True --cluster_time_out=300 net.py
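
Both --worker_num and --local_worker_num are set to 2 here, i.e. a single-node, two-card run. Per-rank logs are written under the directory given by --log_dir (msrun_log in this example), which is the first place to check if one worker fails.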

Known Issues & Tips

MindSpore 2.7.1 Warning: Users running on MindSpore 2.7.1 might encounter an AttributeError: 'NoneType' object has no attribute 'total_instance_count'. This message is harmless: it does not affect the final image output and will be resolved in a subsequent MindSpore release.

Weight Loading Optimization: If you experience slow weight loading times, we recommend merging the optimization introduced in PR #1422.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline?
  • Did you make sure to update the documentation with your changes? E.g. record bug fixes or new features in What's New. Here are the documentation guidelines.
  • Did you build and run the code without any errors?
  • Did you report the running environment (NPU type/MS version) and performance in the doc? (better record it for data loading, model inference, or training tasks)
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@xxx

@Cui-yshoho Cui-yshoho requested a review from vigo999 as a code owner December 3, 2025 03:47
@gemini-code-assist
Contributor

Summary of Changes

Hello @Cui-yshoho, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the MindOne Diffusers library by integrating the Flux2Pipeline for text-to-image generation. The core focus is on enabling efficient multi-card inference for the large FLUX.2 model through ZeRO-3 sharding, making it feasible to run on distributed setups. The changes encompass the addition of new model architectures, VAE components, LoRA loading capabilities, and comprehensive documentation, ensuring a robust and scalable implementation of Flux2 within the MindSpore ecosystem.

Highlights

  • Multi-Card Inference for Flux2: Enabled distributed inference for Flux2Pipeline using ZeRO-3 sharding to handle large model weights, making it feasible to run on distributed setups.
  • New Flux2 Components: Introduced Flux2Transformer2DModel, AutoencoderKLFlux2, and Flux2Pipeline with their respective image processor and output structures into the MindOne Diffusers library.
  • LoRA Support: Added Flux2LoraLoaderMixin and conversion utilities for Flux2 LoRA checkpoints, allowing for efficient loading of LoRA layers (a hypothetical loading sketch follows this list).
  • Documentation & Integration: Updated documentation and integrated new Flux2 components across the library, ensuring comprehensive coverage and ease of use.
  • MindSpore Compatibility: Included cartesian_prod for MindSpore compatibility and adjusted AttentionModuleMixin to properly handle QKV projection fusion specific to Flux2's architecture.
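
As a follow-up to the LoRA support highlighted above, here is a minimal, hypothetical loading sketch on top of the pipe built in the sample code; the repository id and weight file name are placeholders, and load_lora_weights is assumed to be exposed through Flux2LoraLoaderMixin in the same way as in other diffusers pipelines:

# Hypothetical sketch: the LoRA repo id and weight_name below are placeholders, not real artifacts.
pipe.load_lora_weights("your-namespace/flux2-example-lora", weight_name="pytorch_lora_weights.safetensors")
image = pipe(prompt=prompt, num_inference_steps=50, guidance_scale=2.5).images[0]
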
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Cui-yshoho, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the MindSpore Diffusers library by integrating the FLUX.2 model with robust support for distributed inference. It addresses the memory demands of large models by leveraging ZeRO-3 sharding, allowing the Flux2Pipeline to run efficiently across multiple cards. This integration not only frees up memory for larger models but also provides a foundation for future large-scale model deployments, complete with LoRA support and comprehensive testing.

Highlights

  • Multi-card Inference for Flux2 Pipeline: Enabled distributed inference for the Flux2Pipeline in MindSpore, which is crucial for handling the large model weights of FLUX.2.
  • ZeRO-3 Sharding Implementation: Leveraged ZeRO-3 sharding using the prepare_train_network utility for memory-heavy transformer and text_encoder modules, significantly optimizing memory efficiency and parallelism.
  • Flux2 Model Integration: Introduced new model components specific to Flux2, including AutoencoderKLFlux2 and Flux2Transformer2DModel, along with their respective attention mechanisms and utilities, to fully support the Flux2 architecture.
  • LoRA Support for Flux2: Added Flux2LoraLoaderMixin and corresponding conversion utilities to enable seamless loading and application of LoRA layers for the Flux2 model.
  • Comprehensive Documentation and Testing: Included new documentation files for Flux2 models and pipelines, and added dedicated test cases to ensure the functionality and correctness of the new integrations.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for multi-card inference for the Flux2Pipeline using ZeRO-3 sharding, which is a significant feature given the large size of the FLUX.2 model. The changes are comprehensive, adding the necessary pipeline, transformer model, VAE, and LoRA loading components for the Flux2 architecture. The implementation appears well-structured and follows existing patterns in the codebase. I've identified a minor area for code simplification in one of the utility functions to improve readability and remove redundancy. Overall, this is a solid contribution.

Comment on lines +3935 to +3946
if "img" in modality_block_name:
# double_blocks.{N}.img_attn.qkv --> transformer_blocks.{N}.attn.{to_q|to_k|to_v}
to_q_weight, to_k_weight, to_v_weight = mint.chunk(fused_qkv_weight, 3, dim=0)
new_q_name = "attn.to_q"
new_k_name = "attn.to_k"
new_v_name = "attn.to_v"
elif "txt" in modality_block_name:
# double_blocks.{N}.txt_attn.qkv --> transformer_blocks.{N}.attn.{add_q_proj|add_k_proj|add_v_proj}
to_q_weight, to_k_weight, to_v_weight = mint.chunk(fused_qkv_weight, 3, dim=0)
new_q_name = "attn.add_q_proj"
new_k_name = "attn.add_k_proj"
new_v_name = "attn.add_v_proj"

medium

The mint.chunk operation to split fused_qkv_weight is performed at the beginning of the if "qkv" in within_block_name: block. The subsequent calls to mint.chunk within the if "img" in modality_block_name: and elif "txt" in modality_block_name: blocks are redundant as they re-assign the same chunked tensors. You can remove these redundant calls to simplify the code and improve clarity.

Suggested change

- if "img" in modality_block_name:
-     # double_blocks.{N}.img_attn.qkv --> transformer_blocks.{N}.attn.{to_q|to_k|to_v}
-     to_q_weight, to_k_weight, to_v_weight = mint.chunk(fused_qkv_weight, 3, dim=0)
-     new_q_name = "attn.to_q"
-     new_k_name = "attn.to_k"
-     new_v_name = "attn.to_v"
- elif "txt" in modality_block_name:
-     # double_blocks.{N}.txt_attn.qkv --> transformer_blocks.{N}.attn.{add_q_proj|add_k_proj|add_v_proj}
-     to_q_weight, to_k_weight, to_v_weight = mint.chunk(fused_qkv_weight, 3, dim=0)
-     new_q_name = "attn.add_q_proj"
-     new_k_name = "attn.add_k_proj"
-     new_v_name = "attn.add_v_proj"
+ if "img" in modality_block_name:
+     # double_blocks.{N}.img_attn.qkv --> transformer_blocks.{N}.attn.{to_q|to_k|to_v}
+     new_q_name = "attn.to_q"
+     new_k_name = "attn.to_k"
+     new_v_name = "attn.to_v"
+ elif "txt" in modality_block_name:
+     # double_blocks.{N}.txt_attn.qkv --> transformer_blocks.{N}.attn.{add_q_proj|add_k_proj|add_v_proj}
+     new_q_name = "attn.add_q_proj"
+     new_k_name = "attn.add_k_proj"
+     new_v_name = "attn.add_v_proj"
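
Since both branches split fused_qkv_weight identically, hoisting the single mint.chunk call above the if/elif is purely a de-duplication; behavior is unchanged.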

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for multi-card inference for the Flux2Pipeline using ZeRO-3 sharding, which is a significant and well-implemented feature. The changes are comprehensive, including the addition of new models, pipelines, documentation, and utilities for LoRA and checkpoint conversion. The code is generally of high quality. I've provided a few minor suggestions for code cleanup, typo fixes, and clarification on certain assumptions to further improve the codebase. Great work on this complex feature!

Comment on lines +6838 to +6842
class LoraLoaderMixin(StableDiffusionLoraLoaderMixin):
    def __init__(self, *args, **kwargs):
        deprecation_message = "LoraLoaderMixin is deprecated and this will be removed in a future version. Please use `StableDiffusionLoraLoaderMixin`, instead."  # noqa: E501
        deprecate("LoraLoaderMixin", "1.0.0", deprecation_message)
        super().__init__(*args, **kwargs)

medium

The deprecated LoraLoaderMixin class is defined here again. It was removed earlier in this file (in diff 2). This seems to be an unintentional re-addition, possibly from a merge conflict or copy-paste. It should be removed to avoid confusion and keep the codebase clean.

Comment on lines +818 to +821
if img_ids.ndim == 3:
    img_ids = img_ids[0]
if txt_ids.ndim == 3:
    txt_ids = txt_ids[0]

medium

This logic assumes that if img_ids or txt_ids are batched (i.e., have 3 dimensions), all items in the batch share the same IDs, and it proceeds by just taking the first item (img_ids[0]). This could lead to unexpected behavior if the IDs actually differ across the batch. It would be safer to add a comment to clarify this behavior for future developers.
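
A sketch of the kind of clarifying comment being requested here (the wording is only a suggestion, not part of the PR):

# NOTE (assumption): all samples in the batch are expected to share the same positional IDs,
# so only the first item is kept; per-sample IDs would be silently dropped here.
if img_ids.ndim == 3:
    img_ids = img_ids[0]
if txt_ids.ndim == 3:
    txt_ids = txt_ids[0]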

@Cui-yshoho Cui-yshoho force-pushed the flux2 branch 4 times, most recently from 990e8a4 to d6af027 Compare December 3, 2025 07:18
@Cui-yshoho Cui-yshoho added the "new model" (add new model to mindone) label Dec 4, 2025