Conversation

@nekoscratch01

I ran into a problem when I tried to integrate this model: a version-conflict loop.

The current code cannot work with modern transformers versions due to a dependency conflict:

  1. Code uses old transformers APIs (seen_tokens, get_usable_length) → requires transformers < 4.41
  2. diffusers dependency requires transformers >= 4.43
  3. Result: No version of transformers satisfies both requirements

Errors encountered:

  • With transformers >= 4.41: AttributeError: 'DynamicCache' object has no attribute 'seen_tokens'
  • With transformers < 4.41: ImportError: cannot import name 'EncoderDecoderCache' (raised when importing diffusers, which pulls that class from a newer transformers)

The Solution

1. Update to new transformers API

File: modeling_bailing_moe.py

  • Replace Cache.get_usable_length(seq, layer) with Cache.get_seq_length()
  • Replace Cache.seen_tokens with the cache_length value (from get_seq_length())

This makes the code compatible with transformers 4.41+.
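
For illustration, here is a minimal sketch of the migration; the helper name get_past_length and the usage below are hypothetical, not the actual code in modeling_bailing_moe.py:

```python
from typing import Optional

from transformers.cache_utils import Cache, DynamicCache


def get_past_length(past_key_values: Optional[Cache], layer_idx: int = 0) -> int:
    """Cached sequence length via the post-4.41 Cache API.

    Pre-4.41 call sites being replaced:
        past_key_values.get_usable_length(kv_seq_len, layer_idx)
        past_key_values.seen_tokens
    """
    if past_key_values is None:
        return 0
    # get_seq_length() covers both old call sites; the value it returns
    # plays the role of cache_length wherever seen_tokens was read before.
    return past_key_values.get_seq_length(layer_idx)


cache = DynamicCache()
cache_length = get_past_length(cache)  # 0 for an empty cache
```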

2. Remove diffusers dependency

Files: audio_tokenizer/oobleck_distribution.py (new), audio_tokenizer/modeling_audio_vae.py

  • Extract the OobleckDiagonalGaussianDistribution class (~50 lines) from diffusers
  • Only this one class was needed, yet it pulled in the entire diffusers library
  • Eliminates the version conflict entirely
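
For reference, a sketch of what the extracted class can look like, adapted from diffusers.models.autoencoders.autoencoder_oobleck. Here torch.randn stands in for diffusers' randn_tensor helper, and the KL term is trimmed to the standard-normal case, so the actual PR file may differ:

```python
from typing import Optional

import torch
import torch.nn as nn


class OobleckDiagonalGaussianDistribution:
    """Diagonal Gaussian over VAE latents, parameterized as [mean | scale]
    chunks along the channel dimension. Local copy so the audio VAE does
    not need to import diffusers."""

    def __init__(self, parameters: torch.Tensor, deterministic: bool = False):
        self.parameters = parameters
        self.mean, self.scale = parameters.chunk(2, dim=1)
        # softplus keeps the standard deviation strictly positive.
        self.std = nn.functional.softplus(self.scale) + 1e-4
        self.var = self.std * self.std
        self.logvar = torch.log(self.var)
        self.deterministic = deterministic

    def sample(self, generator: Optional[torch.Generator] = None) -> torch.Tensor:
        # Reparameterized sample: mean + std * eps.
        eps = torch.randn(
            self.mean.shape,
            generator=generator,
            device=self.parameters.device,
            dtype=self.parameters.dtype,
        )
        return self.mean + self.std * eps

    def kl(self) -> torch.Tensor:
        # KL divergence against a standard normal prior.
        if self.deterministic:
            return torch.tensor([0.0])
        return (self.mean * self.mean + self.var - self.logvar - 1.0).sum(1).mean()

    def mode(self) -> torch.Tensor:
        return self.mean
```

The encoder output, split into mean and scale along the channel dimension, is passed in as parameters; inference code would then typically call sample() or mode() on the result.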

Result

  • Works with transformers 4.55.0
  • No dependency conflicts
  • Smaller dependency footprint

Replace deprecated transformers Cache API:
- Cache.get_usable_length() -> Cache.get_seq_length()
- Cache.seen_tokens -> use cache_length

These APIs were removed in transformers 4.41+, causing AttributeError.
Tested with transformers 4.55.0.

Extract OobleckDiagonalGaussianDistribution to local implementation.

Only one small class was needed from diffusers. Extracting locally to:
- Reduce dependency footprint
- Avoid potential version conflicts
- Maintain identical functionality

Source: diffusers.models.autoencoders.autoencoder_oobleck
@yongjie-lv
Collaborator

Thanks for your interest!
On our end, the environment is transformers==4.52.4 and diffusers==0.33.0, and the entire inference process is working correctly. You could try referring to our requirements.txt file, or using the recently published Docker image and Dockerfile. Please give it another try with one of those.
Thanks again for reaching out. Feel free to ask if you have any more questions!

@nekoscratch01
Author

Thanks for your response! I'm not sure why it doesn't work on my end. I'm currently working on a project that integrates Ming into our framework, and I suspect the issue is related to the latest transformers version.
This PR actually resolves that dependency loop nicely and ensures compatibility with newer transformers releases. Thanks for checking it out!
