[ICLR 2026] Official implementation of the paper "mtLoRA: Scalable Multi-Task Low-Rank Model Adaptation" (Claude Agent reproduction supported): +2.3% over SOTA with 47% fewer parameters
pytorch llama lora multi-task-learning peft mixture-of-experts llm low-rank-adaptation parameter-efficient-fine-tuning iclr2026 mtlora multi-task-lora scalable-multi-task-adaptation
Updated Mar 4, 2026 - Python