A paper list covering large multi-modality models, parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, for preliminary insight.
Updated Jul 11, 2024
The official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024).
[CVPR 2024] Code for "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory".
Memory-efficient fine-tuning; supports fine-tuning a 7B model within 24 GB of GPU memory.
[ECCV 2024] Code for "SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning".
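The repositories above share a common idea: instead of updating a full pretrained weight matrix, train a small low-rank (or otherwise lightweight) module alongside it. A minimal sketch of a LoRA-style low-rank adapter is shown below; the shapes, rank, and names are illustrative assumptions, not taken from any of the listed codebases.

```python
import numpy as np

# Hypothetical layer size and adapter rank (illustrative only).
d_out, d_in, r = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

def adapted_forward(x):
    """Forward pass with the low-rank update W + B @ A applied lazily."""
    return W @ x + B @ (A @ x)

# Only A and B are trained, so the trainable parameter count shrinks
# from d_out * d_in down to r * (d_out + d_in).
full_params = W.size
adapter_params = A.size + B.size
print(f"full: {full_params}, adapter: {adapter_params}, "
      f"ratio: {adapter_params / full_params:.2%}")
```

With these shapes the adapter trains under 2% of the layer's parameters, which is why such methods fit large-model fine-tuning into modest GPU memory budgets.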