- Could you add adapter-based finetuning (e.g., LoRA)? It dramatically lowers the memory barrier to customizing dLLMs (see the sketch after this list).
- Fast-dLLM and future dLLMs (Dream, PRISM, etc.) follow a similar block-diffusion recipe; adding support for them through `ModelRegistry` would be great!
  https://github.com/NVlabs/Fast-dLLM/tree/main/v2
  https://github.com/ML-GSAI/LLaDA
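
To make the first ask concrete, here is a minimal sketch of how adapter-based finetuning could plug in, assuming the dLLM backbone loads as a Hugging Face model and using peft's LoRA wrapper. The checkpoint id, `target_modules` names, and hyperparameters are placeholders, not the repo's actual configuration:

```python
# Hypothetical sketch: wrap a dLLM backbone (here LLaDA, loaded via
# transformers) with LoRA adapters through peft. Checkpoint id, module
# names, and hyperparameters are illustrative, not prescriptive.
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "GSAI-ML/LLaDA-8B-Base"  # assumed HF checkpoint; adjust as needed
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Low-rank adapters on the attention projections only: the frozen base
# weights are loaded once, and only the small A/B matrices get gradients.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # depends on the backbone's layer names
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically <1% of the full parameter count
```

If `ModelRegistry` support for Fast-dLLM/Dream lands, the same adapter wrapping could presumably be applied to whatever backbone the registry returns.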