ChatGLM-6B fine-tuning and Alpaca fine-tuning
Updated Apr 21, 2024 - Python
deep learning
Optimizes the Differentiable Search Index (DSI) with data augmentation (Num2Word, stopword removal, POS-MLM) and parameter-efficient fine-tuning (LoRA, QLoRA, AdaLoRA, ConvoLoRA), improving retrieval accuracy while reducing memory and computational overhead; evaluated on the MS MARCO dataset.
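The parameter-efficient fine-tuning methods listed above all build on the low-rank adaptation idea: a frozen pretrained weight W is augmented by a trainable rank-r update B @ A, so only r * (d_in + d_out) parameters are trained instead of d_in * d_out. A minimal NumPy sketch of that idea (the dimensions and function names here are illustrative assumptions, not this repository's code):

```python
import numpy as np

def lora_forward(x, W, A, B, scale=1.0):
    # Output of a linear layer whose weight is adapted as W + scale * (B @ A).
    # Computing (x @ A.T) @ B.T avoids materializing the full d_out x d_in delta.
    return x @ W.T + scale * (x @ A.T) @ B.T

d_in, d_out, r = 768, 768, 8              # assumed sizes for illustration
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

x = rng.standard_normal((1, d_in))
base = x @ W.T
adapted = lora_forward(x, W, A, B)
assert np.allclose(base, adapted)          # with B = 0 the adapter is a no-op

full_params = d_in * d_out                 # 589,824 trainable weights without LoRA
lora_params = r * (d_in + d_out)           # 12,288 trainable weights with rank-8 LoRA
print(f"trainable fraction: {lora_params / full_params:.4f}")
```

AdaLoRA extends this by reallocating the rank budget r across layers during training, and QLoRA additionally quantizes the frozen W to 4 bits; both keep the same low-rank update structure shown here.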