ACL 2024 (Findings)
Moran Yanuka, Morris Alper, Hadar Averbuch-Elor, Raja Giryes
We release the model on HuggingFace here.
The ICC model can be run with a few lines of code using HuggingFace Transformers:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the ICC tokenizer and scoring model
tokenizer = AutoTokenizer.from_pretrained("moranyanuka/icc")
model = AutoModelForSequenceClassification.from_pretrained("moranyanuka/icc").to("cuda")

# Score a batch of captions; higher scores indicate more concrete (visually grounded) captions
captions = ["a great method of quantifying concreteness", "a man with a white shirt"]
text_ids = tokenizer(captions, padding=True, return_tensors="pt", truncation=True).to("cuda")
with torch.inference_mode():
    icc_scores = model(**text_ids)["logits"]
# tensor([[0.0339], [1.0068]])
If you just want to filter your multimodal dataset, you can stop here. Otherwise, follow the instructions below.
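As an illustration, here is a minimal sketch of such filtering on top of the snippet above; the helper name and the 0.5 threshold are assumptions rather than values from the paper, so tune the threshold for your own dataset:

import torch

# Hypothetical helper: keep only captions the ICC model scores above a chosen threshold
def filter_captions(captions, tokenizer, model, threshold=0.5):  # threshold is an assumption
    inputs = tokenizer(captions, padding=True, truncation=True, return_tensors="pt").to(model.device)
    with torch.inference_mode():
        scores = model(**inputs)["logits"].squeeze(-1)
    return [c for c, s in zip(captions, scores.tolist()) if s > threshold]

# With the scores shown above and a 0.5 threshold, only "a man with a white shirt" is kept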
The annotated captions, along with their corresponding ICC scores, can be found here.
git clone https://github.com/moranyanuka/icc_code.git
cd icc_code
To set up our environment, please run:
conda env create -f environment.yml
conda activate icc
The SBA code is heavily based on the LLaVA codebase; we thank the authors for their great repo!
Run the following command:
torchrun --nnodes 1 --nproc_per_node 1 --master_port 25000 sba/train/train_mem.py \
  --model_name_or_path meta-llama/Llama-2-7b-hf \
  --text_tower openai/clip-vit-large-patch14 \
  --tune_mm_mlp_adapter True \
  --mm_text_select_layer -2 \
  --train_data_path data/cc3m_concept_balanced_train.csv \
  --output_dir <path-to-model-output-dir> \
  --num_train_epochs 2 \
  --per_device_train_batch_size 32 \
  --per_device_eval_batch_size 64 \
  --gradient_accumulation_steps 4 \
  --evaluation_strategy no \
  --save_strategy steps \
  --save_steps 500 \
  --save_total_limit 1 \
  --bf16 True \
  --learning_rate 2e-3 \
  --weight_decay 0. \
  --warmup_ratio 0.03 \
  --lr_scheduler_type cosine \
  --logging_steps 1 \
  --tf32 True \
  --model_max_length 2048 \
  --gradient_checkpointing True \
  --lazy_preprocess True \
  --cache_dir <path-to-hf-cache_dir (defaults to None)> \
  --report_to wandb
If you don't have sufficient GPU memory, try decreasing the batch size and increasing gradient_accumulation_steps accordingly (e.g., --per_device_train_batch_size 16 with --gradient_accumulation_steps 8 preserves the effective batch size of 32 × 4 = 128).
To generate the reconstructed captions with the trained SBA, run:
python sba/eval/sba_inference.py \
  --model_path <path-to-trained-sba-model> \
  --cache_dir <path-to-hf-cache_dir (default is None)> \
  --num_beams <num-beams> \
  --batch_size <batch-size> \
  --output-path data/cc3m_concept_balanced_test_with_sba_predictions.csv
This will generate a new file with the reconstructed SBA captions and the corresponding edit distance between each reconstruction and its original caption.
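For reference, here is a minimal illustration of an edit-distance (Levenshtein) computation between an original caption and its reconstruction; the exact metric and any normalization used by the released script may differ:

def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein distance via dynamic programming
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# edit_distance("kitten", "sitting") -> 3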
We provide an example script for generating the reconstructed captions through the VBA, using BLIP-2 as the captioner and Stable Diffusion 2 as the text-to-image model.
Simply run:
python vba/vba_inference.py \
  --batch_size <batch-size> \
  --cache_dir <path-to-hf-cache_dir (default is None)> \
  --num_beams <num-beams> \
  --output-path data/cc3m_concept_balanced_test_with_vba_predictions.csv
This will generate a file at the given output-path containing the VBA reconstructions.
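For intuition, here is a minimal sketch of the caption → image → caption round trip that the VBA performs, using off-the-shelf HuggingFace pipelines; the checkpoint IDs and generation settings below are assumptions for illustration, and the released script handles batching and the exact configuration:

import torch
from diffusers import StableDiffusionPipeline
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Text-to-image and captioning models (checkpoint IDs are assumptions for illustration)
t2i = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
captioner = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

caption = "a man with a white shirt"
image = t2i(caption).images[0]                        # caption -> image
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
out_ids = captioner.generate(**inputs, num_beams=5)   # image -> reconstructed caption
reconstruction = processor.decode(out_ids[0], skip_special_tokens=True)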