fix: unify qwen tts cache dir for tokenizer loading on Windows #218
seidenbergerscott wants to merge 1 commit into jamiepine:main
Conversation
Walkthrough: PyTorchTTSBackend model loading now centralizes Hugging Face Hub cache directory handling by importing HF_HUB_CACHE and passing it to the model loading calls. In addition, the torch_dtype parameter is replaced with dtype, specifying float32 on the CPU path and bfloat16 on the GPU path.
Summary
Why
On Windows local setups, model assets can split between .hf-cache/hub and .hf-cache/transformers, causing speech_tokenizer/preprocessor_config.json load errors and 500s during generation.
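The fix described above can be sketched as follows. This is a minimal illustration, not the PR's actual diff: `resolve_hub_cache` is a hypothetical helper that mirrors huggingface_hub's documented HF_HUB_CACHE resolution order (the real change imports `HF_HUB_CACHE` from `huggingface_hub` directly), and `model_load_kwargs` shows the idea of feeding one cache directory to every `from_pretrained` call so the model and its speech tokenizer land in the same place.

```python
import os
from pathlib import Path


def resolve_hub_cache() -> str:
    """Resolve a single Hugging Face hub cache directory.

    Mirrors huggingface_hub's documented lookup order: an explicit
    HF_HUB_CACHE env var wins, otherwise the "hub" subdirectory of
    HF_HOME (default ~/.cache/huggingface). Hypothetical helper; the
    PR imports HF_HUB_CACHE from huggingface_hub instead.
    """
    if "HF_HUB_CACHE" in os.environ:
        return os.environ["HF_HUB_CACHE"]
    hf_home = os.environ.get(
        "HF_HOME", os.path.join(Path.home(), ".cache", "huggingface")
    )
    return os.path.join(hf_home, "hub")


def model_load_kwargs(use_gpu: bool) -> dict:
    """Build keyword args shared by every from_pretrained call.

    Using one cache_dir for both the model and its speech tokenizer
    prevents the Windows split between .hf-cache/hub and
    .hf-cache/transformers; "dtype" replaces the deprecated
    "torch_dtype" name, with float32 on CPU and bfloat16 on GPU.
    """
    return {
        "cache_dir": resolve_hub_cache(),
        "dtype": "bfloat16" if use_gpu else "float32",
    }
```

With these kwargs, both the model and the speech tokenizer would be loaded through the same cache directory, e.g. `AutoModel.from_pretrained(model_id, **model_load_kwargs(use_gpu))`, so `preprocessor_config.json` is found where the model assets were downloaded.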