- What is it: UPop is the first structured pruning framework for vision-language Transformers. It enables effective structured pruning on various multi-modal and uni-modal tasks (including Visual Reasoning, Image Captioning, Visual Question Answering, Image-Text Retrieval, Text-Image Retrieval, Image Classification, and Image Segmentation), datasets (including NLVR2, COCO Caption, VQAv2, COCO, Flickr30K, ImageNet, and ADE20K), and model architectures (including BLIP, CLIP, DeiT, and Segmenter).
overview.mp4
- What challenge does it tackle: The above video demonstrates that the Unified Search adopted by UPop rescues us from the burden of repeated experiments (e.g., grid search) for finding optimal compression ratios among different modalities and structures. Furthermore, the Progressive Pruning adopted by UPop eliminates the weight gap between the searched model and the pruned subnet to be retrained, thereby achieving better convergence and performance, especially at high compression ratios.
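  A minimal, illustrative sketch (not the official implementation; the function name and the linear schedule are assumptions for exposition) of how a progressive schedule can anneal the kept ratio from 1.0 down to the target during the search phase, so that the searched model converges smoothly to the pruned subnet:

  ```python
  # Illustrative only: gradually tighten the fraction of kept units instead of pruning all at once.
  def progressive_keep_ratio(step: int, total_steps: int, target_ratio: float = 0.5) -> float:
      """Linearly anneal the fraction of kept units from 1.0 down to target_ratio."""
      progress = min(step / max(total_steps, 1), 1.0)
      return 1.0 - (1.0 - target_ratio) * progress

  # With p = 0.5 (2x compression), halfway through the search about 75% of the units are still kept.
  print(progressive_keep_ratio(step=400, total_steps=800))  # 0.75
  ```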
- How about the performance: On multimodal tasks, for example, UPop can achieve 2x compression with only 1.2% and 2.0% accuracy loss on the VQAv2 dataset for Visual Question Answering and the NLVR2 dataset for Visual Reasoning, respectively. On unimodal tasks, for example, UPop can achieve 1.5x and 1.2x compression without any loss of accuracy on the ImageNet dataset for Image Classification and the ADE20K dataset for Image Segmentation, respectively. Some examples at vector-level structured granularity are listed below.
| Example (Task • Dataset • Model • Metric) | Performance | Parameters (M) | FLOPs (G) |
| --- | --- | --- | --- |
| Visual Reasoning • NLVR2 • BLIP • Acc | $83.1 \rightarrow 81.1_{\color{red}\downarrow 2.0}$ | $259.5 \rightarrow 150.2_{\color{ForestGreen}\downarrow 42\%}$ | $132.5 \rightarrow 89.4_{\color{ForestGreen}\downarrow 33\%}$ |
| Image Captioning • COCO Caption • BLIP • SPICE | $23.8 \rightarrow 23.3_{\color{red}\downarrow 0.5}$ | $224.0 \rightarrow 127.1_{\color{ForestGreen}\downarrow 43\%}$ | $65.7 \rightarrow 39.8_{\color{ForestGreen}\downarrow 39\%}$ |
| Visual Question Answering • VQAv2 • BLIP • Acc | $77.5 \rightarrow 76.3_{\color{red}\downarrow 1.2}$ | $361.6 \rightarrow 211.3_{\color{ForestGreen}\downarrow 42\%}$ | $186.1 \rightarrow 109.4_{\color{ForestGreen}\downarrow 41\%}$ |
| Image-Text Retrieval • COCO • BLIP • R@1 | $81.9 \rightarrow 77.4_{\color{red}\downarrow 4.5}$ | $447.6 \rightarrow 248.9_{\color{ForestGreen}\downarrow 44\%}$ | $153.2 \rightarrow 88.3_{\color{ForestGreen}\downarrow 42\%}$ |
| Image-Text Retrieval • COCO • CLIP • R@1 | $71.5 \rightarrow 70.8_{\color{red}\downarrow 0.7}$ | $856.0 \rightarrow 473.7_{\color{ForestGreen}\downarrow 45\%}$ | $395.7 \rightarrow 196.3_{\color{ForestGreen}\downarrow 50\%}$ |
| Text-Image Retrieval • COCO • BLIP • R@1 | $64.3 \rightarrow 59.8_{\color{red}\downarrow 4.5}$ | $447.6 \rightarrow 248.9_{\color{ForestGreen}\downarrow 44\%}$ | $153.2 \rightarrow 88.3_{\color{ForestGreen}\downarrow 42\%}$ |
| Text-Image Retrieval • COCO • CLIP • R@1 | $56.8 \rightarrow 53.1_{\color{red}\downarrow 3.7}$ | $856.0 \rightarrow 473.7_{\color{ForestGreen}\downarrow 45\%}$ | $395.7 \rightarrow 196.3_{\color{ForestGreen}\downarrow 50\%}$ |
| Image-Text Retrieval • Flickr30K • BLIP • R@1 | $96.8 \rightarrow 92.2_{\color{red}\downarrow 4.4}$ | $447.6 \rightarrow 250.5_{\color{ForestGreen}\downarrow 44\%}$ | $153.2 \rightarrow 91.0_{\color{ForestGreen}\downarrow 41\%}$ |
| Image-Text Retrieval • Flickr30K • CLIP • R@1 | $96.8 \rightarrow 93.2_{\color{red}\downarrow 3.6}$ | $856.0 \rightarrow 474.3_{\color{ForestGreen}\downarrow 45\%}$ | $395.7 \rightarrow 201.1_{\color{ForestGreen}\downarrow 49\%}$ |
| Text-Image Retrieval • Flickr30K • BLIP • R@1 | $86.9 \rightarrow 82.0_{\color{red}\downarrow 4.9}$ | $447.6 \rightarrow 250.5_{\color{ForestGreen}\downarrow 44\%}$ | $153.2 \rightarrow 91.0_{\color{ForestGreen}\downarrow 41\%}$ |
| Text-Image Retrieval • Flickr30K • CLIP • R@1 | $86.6 \rightarrow 80.5_{\color{red}\downarrow 6.1}$ | $856.0 \rightarrow 474.3_{\color{ForestGreen}\downarrow 45\%}$ | $395.7 \rightarrow 201.1_{\color{ForestGreen}\downarrow 49\%}$ |
| Classification • ImageNet • DeiT • Acc@1 | $79.9 \rightarrow 80.2_{\color{ForestGreen}\uparrow 0.3}$ | $22.0 \rightarrow 15.7_{\color{ForestGreen}\downarrow 29\%}$ | $4.6 \rightarrow 3.2_{\color{ForestGreen}\downarrow 30\%}$ |
| Classification • ImageNet • DeiT • Acc@5 | $95.0 \rightarrow 95.1_{\color{ForestGreen}\uparrow 0.1}$ | $22.0 \rightarrow 15.7_{\color{ForestGreen}\downarrow 29\%}$ | $4.6 \rightarrow 3.2_{\color{ForestGreen}\downarrow 30\%}$ |
| Segmentation • ADE20K • Segmenter • $\text{mIoU}^s$ | $45.3 \rightarrow 45.3_{\color{ForestGreen}\uparrow 0.0}$ | $26.4 \rightarrow 21.5_{\color{ForestGreen}\downarrow 19\%}$ | $38.6 \rightarrow 30.4_{\color{ForestGreen}\downarrow 21\%}$ |
| Segmentation • ADE20K • Segmenter • $\text{mIoU}^m$ | $46.9 \rightarrow 47.1_{\color{ForestGreen}\uparrow 0.2}$ | $26.4 \rightarrow 21.5_{\color{ForestGreen}\downarrow 19\%}$ | $38.6 \rightarrow 30.4_{\color{ForestGreen}\downarrow 21\%}$ |
- (Jun 2023) We are working on a new project, CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers, which effectively reduces computational cost for acceleration. [Paper] [Code]
- (Jun 30, 2023) We released the implementation, scripts, checkpoints, and logs. [Code] [Website]
- (Apr 25, 2023) Our work UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers was accepted by ICML 2023. [Paper] [ArXiv]
The code is tested on `pytorch==1.11.0`, `cuda==11.3.1`, and `python==3.8.13`. The dependencies can be installed by:

```bash
conda env create -f environment.yml
```
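A quick way to verify the environment after installation (a minimal sketch; the exact version strings depend on your setup):

```python
# Sanity-check the installed environment; run inside the activated conda env.
import torch

print(torch.__version__)           # expected: 1.11.0
print(torch.version.cuda)          # expected: 11.3
print(torch.cuda.is_available())   # expected: True on a machine with a compatible GPU
```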
- Dataset & Annotation

  Download the NLVR2 dataset, unzip it under the `datasets` folder, and accordingly modify the `image_root` in config. Download the all-in-one annotations (including annotations for the Visual Reasoning, Image Captioning, VQA, Image-Text Retrieval, and Text-Image Retrieval tasks) from Google Drive or Baidu Drive, unzip it under the `annotation` folder, and accordingly modify the `annotation` in config. See here for the expected folder structures. A quick way to double-check these paths is sketched below.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, and accordingly modify the `--pretrained` of the scripts. For example, to evaluate a 2x compressed model:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_nlvr.py --evaluate \
      --pretrained output/nlvr_nlvr2_compression_2x/model_base_nlvr_nlvr2_2x_compressed.pth \
      --config ./configs/nlvr.yaml \
      --output_dir output/nlvr_nlvr2_compression_2x
  ```
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, and accordingly modify the `pretrained` in config. For example, to conduct a 2x compression on 8 A100 GPUs:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_nlvr.py --p 0.5 --epoch 15 \
      --pretrained pretrained/model_base_nlvr.pth \
      --config ./configs/nlvr.yaml \
      --output_dir output/nlvr_nlvr2_compression_2x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 2x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 3x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 4x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 5x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 10x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- Dataset & Annotation

  Download the COCO Caption dataset, unzip it under the `datasets` folder, and accordingly modify the `image_root` in config. Download the all-in-one annotations from Google Drive or Baidu Drive, unzip it under the `annotation` folder, and accordingly modify the `annotation` in config. See here for the expected folder structures.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, and accordingly modify the `--pretrained` of the scripts. For example, to evaluate a 2x compressed model:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_caption.py --evaluate \
      --pretrained output/caption_coco_compression_2x/model_base_caption_capfilt_large_coco_2x_compressed.pth \
      --config ./configs/caption_coco.yaml \
      --output_dir output/caption_coco_compression_2x
  ```
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, and accordingly modify the `pretrained` in config. For example, to conduct a 2x compression on 8 A100 GPUs:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_caption.py --p 0.5 --epoch 5 \
      --pretrained pretrained/model_base_caption_capfilt_large.pth \
      --config ./configs/caption_coco.yaml \
      --output_dir output/caption_coco_compression_2x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 2x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 4x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- Dataset & Annotation

  Download the VQAv2 dataset and the Visual Genome dataset, unzip them under the `datasets` folder, and accordingly modify the `image_root` in config. Download the all-in-one annotations from Google Drive or Baidu Drive, unzip it under the `annotation` folder, and accordingly modify the `annotation` in config. See here for the expected folder structures.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, and accordingly modify the `--pretrained` of the scripts. For example, to evaluate a 2x compressed model:

  > [!NOTE]
  > Note that the scripts will generate the answer file `vqa_result.json`, which should be submitted to the official server to obtain evaluation results. A quick way to inspect this file is sketched after the command.

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_vqa.py --evaluate \
      --pretrained output/vqa_vqa2_compression_2x/model_base_vqa_capfilt_large_vqa2_2x_compressed.pth \
      --config ./configs/vqa.yaml \
      --output_dir output/vqa_vqa2_compression_2x
  ```
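  A minimal sketch for inspecting the generated answers before submission; the entry fields and the exact output location are assumptions based on the common VQAv2 submission format, so adjust the path to wherever the evaluation script reports saving the file:

  ```python
  # Peek at the generated answers before uploading them to the official evaluation server.
  import json

  with open("vqa_result.json") as f:  # adjust to the path printed by the evaluation script
      results = json.load(f)

  print(len(results))   # number of answered questions
  print(results[0])     # expected to look like {"question_id": ..., "answer": "..."}
  ```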
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, and accordingly modify the `pretrained` in config. For example, to conduct a 2x compression on 8 A100 GPUs:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_vqa.py --p 0.5 --epoch 10 \
      --pretrained pretrained/model_base_vqa_capfilt_large.pth \
      --config ./configs/vqa.yaml \
      --output_dir output/vqa_vqa2_compression_2x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 2x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 4x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- Dataset & Annotation

  Download the COCO dataset, unzip it under the `datasets` folder, and accordingly modify the `image_root` in config. Download the all-in-one annotations from Google Drive or Baidu Drive, unzip it under the `annotation` folder, and accordingly modify the `annotation` in config. See here for the expected folder structures.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, and accordingly modify the `--pretrained` of the scripts. For example, to evaluate a 2x compressed model:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_retrieval.py --evaluate \
      --pretrained output/retrieval_coco_compression_2x/model_base_retrieval_coco_2x_compressed.pth \
      --config ./configs/retrieval_coco.yaml \
      --output_dir output/retrieval_coco_compression_2x
  ```
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, and accordingly modify the `pretrained` in config. For example, to conduct a 2x compression on 8 A100 GPUs:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_retrieval.py --p 0.5 --epoch 6 \
      --pretrained pretrained/model_base_retrieval_coco.pth \
      --config ./configs/retrieval_coco.yaml \
      --output_dir output/retrieval_coco_compression_2x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 2x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 4x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- Dataset & Annotation

  Download the Flickr30k dataset, unzip it under the `datasets` folder, and accordingly modify the `image_root` in config. Download the all-in-one annotations from Google Drive or Baidu Drive, unzip it under the `annotation` folder, and accordingly modify the `annotation` in config. See here for the expected folder structures.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, and accordingly modify the `--pretrained` of the scripts. For example, to evaluate a 2x compressed model:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_retrieval_flickr.py --evaluate \
      --pretrained output/retrieval_flickr_compression_2x/model_base_retrieval_flickr_2x_compressed.pth \
      --config ./configs/retrieval_flickr.yaml \
      --output_dir output/retrieval_flickr_compression_2x
  ```
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, and accordingly modify the `pretrained` in config. For example, to conduct a 2x compression on 8 A100 GPUs:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_retrieval_flickr.py --p 0.5 --epoch 12 \
      --pretrained pretrained/model_base_retrieval_flickr.pth \
      --config ./configs/retrieval_flickr.yaml \
      --output_dir output/retrieval_flickr_compression_2x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 2x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 4x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- Dataset & Annotation

  Download the COCO dataset, unzip it under the `datasets` folder, and accordingly modify the `image_root` in config. Download the all-in-one annotations from Google Drive or Baidu Drive, unzip it under the `annotation` folder, and accordingly modify the `annotation` in config. See here for the expected folder structures.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, and accordingly modify the `--pretrained` of the scripts. For example, to evaluate a 2x compressed model:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_retrieval_clip.py --evaluate \
      --pretrained output/retrieval_coco_clip_compression_2x/clip_large_retrieval_coco_2x_compressed.pth \
      --config ./configs/retrieval_coco_clip.yaml \
      --output_dir output/retrieval_coco_clip_compression_2x
  ```
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, and accordingly modify the `pretrained` in config. For example, to conduct a 2x compression on 8 A100 GPUs:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_retrieval_clip.py --p 0.5 --epoch 6 \
      --pretrained pretrained/clip_large_retrieval_coco.pth \
      --config ./configs/retrieval_coco_clip.yaml \
      --output_dir output/retrieval_coco_clip_compression_2x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 2x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 4x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- Dataset & Annotation

  Download the Flickr30k dataset, unzip it under the `datasets` folder, and accordingly modify the `image_root` in config. Download the all-in-one annotations from Google Drive or Baidu Drive, unzip it under the `annotation` folder, and accordingly modify the `annotation` in config. See here for the expected folder structures.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, and accordingly modify the `--pretrained` of the scripts. For example, to evaluate a 2x compressed model:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_retrieval_clip.py --evaluate \
      --pretrained output/retrieval_flickr_clip_compression_2x/clip_large_retrieval_flickr_2x_compressed.pth \
      --config ./configs/retrieval_flickr_clip.yaml \
      --output_dir output/retrieval_flickr_clip_compression_2x
  ```
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, and accordingly modify the `pretrained` in config. For example, to conduct a 2x compression on 8 A100 GPUs:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_retrieval_clip.py --p 0.5 --epoch 12 \
      --pretrained pretrained/clip_large_retrieval_flickr.pth \
      --config ./configs/retrieval_flickr_clip.yaml \
      --output_dir output/retrieval_flickr_clip_compression_2x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 2x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 4x | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- Dataset & Annotation

  Download the ImageNet dataset, unzip it under the `datasets` folder, and accordingly modify the option `--data-path` in the compression and evaluation scripts. See here for the expected folder structures. A quick layout check is sketched below.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, and accordingly modify the option `--resume` of the scripts. For example, to evaluate a 50% compressed model:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_deit.py --eval --dist-eval \
      --data-path datasets/vision/imagenet \
      --model deit_small_patch16_224 \
      --resume output/train_deit_small_patch16_224_60s_300r_050x/deit_small_patch16_224_050x_compressed.pth
  ```
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, and accordingly modify the option `--finetune` of the scripts. For example, to conduct a 50% compression on 8 A100 GPUs:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_deit.py \
      --data-path datasets/vision/imagenet \
      --finetune pretrained/deit_small_patch16_224-cd65a155.pth \
      --model deit_small_patch16_224 \
      --epochs-search 60 \
      --epochs 300 \
      --batch-size 512 \
      --lr-search 1e-4 \
      --lr 1e-4 \
      --warmup-epochs 0 \
      --p 0.5 \
      --interval 800 \
      --output_dir output/train_deit_small_patch16_224_60s_300r_050x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 10% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 20% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 30% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 40% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 50% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- Dataset & Annotation

  Download the ADE20K dataset, unzip it under the `datasets` folder, and accordingly modify the option `--dataset` in the compression and evaluation scripts. See here for the expected folder structures.
- Evaluation

  Download compressed checkpoints from the table below, put them under the `output` folder, accordingly modify the path option of the scripts, and export the dataset folder as the environment variable `DATASET`. For example, to evaluate a 30% compressed model:

  ```bash
  export DATASET=datasets/vision

  # for single-scale testing
  python -m torch.distributed.run --nproc_per_node=4 segm/eval/miou.py \
      output/seg_small_mask_16s_64r_030x/seg_small_mask_030x_compressed.pth ade20k --singlescale

  # for multi-scale testing
  python -m torch.distributed.run --nproc_per_node=4 segm/eval/miou.py \
      output/seg_small_mask_16s_64r_030x/seg_small_mask_030x_compressed.pth ade20k --multiscale
  ```
- Compression

  Download the uncompressed model from the table below, put it under the `pretrained` folder, accordingly modify the option `--pretrained` of the scripts, and export the dataset folder as the environment variable `DATASET`. For example, to conduct a 30% compression on 4 A100 GPUs:

  ```bash
  export DATASET=datasets/vision

  python -m torch.distributed.run --nproc_per_node=4 segm/train.py --dataset ade20k \
      --backbone vit_small_patch16_384 --decoder mask_transformer --no-resume \
      --pretrained pretrained/seg_small_mask.pth \
      --epochs-search 16 \
      --epochs 64 \
      --batch-size 64 \
      --lr-search 4e-3 \
      -lr 4e-3 \
      --p 0.30 \
      --interval 200 \
      --log-dir output/seg_small_mask_16s_64r_030x
  ```
- Download

  | Reduction | Uncompressed Model | Compression Script | Training Log | Compressed Checkpoint | Evaluation Script |
  | --- | --- | --- | --- | --- | --- |
  | 10% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 15% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 20% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
  | 30% | Google/Baidu | Link | Google/Baidu | Google/Baidu | Link |
- For BLIP and CLIP models, evaluate the 2x compressed BLIP model on the NLVR2 dataset as an example:

  ```bash
  python compress_nlvr.py --evaluate \
      --pretrained output/nlvr_nlvr2_compression_2x/model_base_nlvr_nlvr2_2x_compressed.pth \
      --config ./configs/nlvr.yaml \
      --output_dir output/nlvr_nlvr2_compression_2x
  ```
- For DeiT, evaluate the 50% compressed model on the ImageNet dataset as an example:

  > [!NOTE]
  > Note that the option `--dist-eval` should be omitted when evaluating on a single GPU.

  ```bash
  python compress_deit.py --eval \
      --data-path datasets/vision/imagenet \
      --model deit_small_patch16_224 \
      --resume output/train_deit_small_patch16_224_60s_300r_050x/deit_small_patch16_224_050x_compressed.pth
  ```
- For Segmenter, evaluate the 30% compressed model on the ADE20K dataset as an example:

  ```bash
  export DATASET=datasets/vision

  # for single-scale testing
  python segm/eval/miou.py \
      output/seg_small_mask_16s_64r_030x/seg_small_mask_030x_compressed.pth ade20k --singlescale

  # for multi-scale testing
  python segm/eval/miou.py \
      output/seg_small_mask_16s_64r_030x/seg_small_mask_030x_compressed.pth ade20k --multiscale
  ```
- For BLIP and CLIP models, compress the BLIP model to half on the NLVR2 dataset as an example:

  ```bash
  python compress_nlvr.py --p 0.5 --epoch 15 \
      --pretrained pretrained/model_base_nlvr.pth \
      --config ./configs/nlvr.yaml \
      --output_dir output/nlvr_nlvr2_compression_2x
  ```
- For DeiT, conduct a 50% compression on the ImageNet dataset as an example:

  ```bash
  python compress_deit.py \
      --data-path datasets/vision/imagenet \
      --finetune pretrained/deit_small_patch16_224-cd65a155.pth \
      --model deit_small_patch16_224 \
      --epochs-search 60 \
      --epochs 300 \
      --batch-size 512 \
      --lr-search 1e-4 \
      --lr 1e-4 \
      --warmup-epochs 0 \
      --p 0.5 \
      --interval 800 \
      --output_dir output/train_deit_small_patch16_224_60s_300r_050x
  ```
- For Segmenter, conduct a 30% compression on the ADE20K dataset as an example:

  ```bash
  export DATASET=datasets/vision

  python segm/train.py --dataset ade20k \
      --backbone vit_small_patch16_384 --decoder mask_transformer --no-resume \
      --pretrained pretrained/seg_small_mask.pth \
      --epochs-search 16 \
      --epochs 64 \
      --batch-size 64 \
      --lr-search 4e-3 \
      -lr 4e-3 \
      --p 0.30 \
      --interval 200 \
      --log-dir output/seg_small_mask_16s_64r_030x
  ```
- For BLIP and CLIP models, change the `batch_size_test` (or the `batch_size` for the Image Captioning task) in the corresponding config file to a smaller number; see the sketch after this list.
- For DeiT, modify the option `--batch-size` of the scripts to a smaller number.
- For Segmenter, the default evaluation batch size is `1`. For single-scale testing, the peak GPU memory usage on a single card is less than 5 GB, so it should run on most types of GPUs. For multi-scale testing, the peak GPU memory usage on a single card is about 13 GB, which may require a GPU with relatively larger memory.
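A minimal sketch (assuming PyYAML; the key names follow the items above) for lowering the evaluation batch size in a BLIP/CLIP config without editing it by hand:

```python
# Reduce the evaluation batch size in a config file, e.g., for NLVR2.
# Caveat: re-dumping the YAML drops comments and may reorder keys.
import yaml

path = "./configs/nlvr.yaml"
with open(path) as f:
    cfg = yaml.safe_load(f)

cfg["batch_size_test"] = 8   # pick a value that fits the available GPU memory
with open(path, "w") as f:
    yaml.safe_dump(cfg, f)
```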
- For BLIP and CLIP models, change the `batch_size_train` and `batch_size_test` (or the `batch_size` for the Image Captioning task) in the corresponding config file to a smaller number. Besides, the option `--amp` of the compression scripts can be used to enable mixed precision. Compress the BLIP model to half on the NLVR2 dataset as an example:

  ```bash
  python -m torch.distributed.run --nproc_per_node=8 compress_nlvr.py --p 0.5 --epoch 15 --amp \
      --pretrained pretrained/model_base_nlvr.pth \
      --config ./configs/nlvr.yaml \
      --output_dir output/nlvr_nlvr2_compression_2x
  ```

  > [!WARNING]
  > Note that using mixed precision may produce NaN gradients. Since UPop takes gradients as metrics to determine pruned positions, NaN gradients may disrupt the determination and degrade the performance.
- For DeiT and Segmenter, modify the option `--batch-size` of the scripts to a smaller number. Mixed precision is temporarily not supported, as it frequently causes NaN gradients.
```
├── annotation
│   ├── answer_list.json
│   ├── coco_gt
│   │   ├── coco_karpathy_test_gt.json
│   │   └── coco_karpathy_val_gt.json
│   └── ...
├── clip
├── compress_caption.py
├── compress_deit.py
├── compress_nlvr.py
├── compress ...
├── configs
├── data
├── datasets
│   └── vision
│       ├── coco
│       ├── flickr
│       ├── NLVR2
│       └── ...
├── deit
├── log
├── models
├── output
├── pretrained
│   ├── bert-base-uncased
│   ├── clip_large_retrieval_coco.pth
│   ├── clip_large_retrieval_flickr.pth
│   └── ...
├── segm
├── transform
└── utils.py
```
This code is built upon BLIP, CLIP, DeiT, Segmenter, and timm. Thanks to these awesome open-source projects!
If you find our work or this code useful, please consider citing the corresponding paper:
```bibtex
@InProceedings{pmlr-v202-shi23e,
  title     = {{UP}op: Unified and Progressive Pruning for Compressing Vision-Language Transformers},
  author    = {Shi, Dachuan and Tao, Chaofan and Jin, Ying and Yang, Zhendong and Yuan, Chun and Wang, Jiaqi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {31292--31311},
  year      = {2023},
  volume    = {202},
  publisher = {PMLR}
}
```