
ICC: Quantifying Image Caption Concreteness for Multimodal Dataset Curation

ACL 2024 (Findings)

Moran Yanuka, Morris Alper, Hadar Averbuch-Elor, Raja Giryes

Running the ICC model with HuggingFace 🤗

We release the model on HuggingFace here.

The ICC model can be run with a few lines of code using HuggingFace:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the ICC scorer (a text-only sequence-regression model) from the Hub
tokenizer = AutoTokenizer.from_pretrained("moranyanuka/icc")
model = AutoModelForSequenceClassification.from_pretrained("moranyanuka/icc").to("cuda")

captions = ["a great method of quantifying concreteness", "a man with a white shirt"]
text_ids = tokenizer(captions, padding=True, return_tensors="pt", truncation=True).to("cuda")

# Higher scores indicate more concrete captions
with torch.inference_mode():
  icc_scores = model(**text_ids)["logits"]

# tensor([[0.0339], [1.0068]])

If you just want to filter your multimodal dataset, you can stop here (see the sketch below). Otherwise, follow the instructions in the remaining sections.
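
As a rough illustration, here is a minimal sketch of threshold-based filtering with ICC scores. The 0.5 cutoff, the batch size, and the (image_url, caption) pair structure are illustrative assumptions, not values prescribed by the paper:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("moranyanuka/icc")
model = AutoModelForSequenceClassification.from_pretrained("moranyanuka/icc").to("cuda")

def filter_by_icc(pairs, threshold=0.5, batch_size=256):
    """Keep only (image_url, caption) pairs whose caption scores above the threshold."""
    kept = []
    for i in range(0, len(pairs), batch_size):
        batch = pairs[i : i + batch_size]
        ids = tokenizer([caption for _, caption in batch], padding=True,
                        truncation=True, return_tensors="pt").to("cuda")
        with torch.inference_mode():
            # One ICC logit per caption; squeeze the trailing singleton dim
            scores = model(**ids)["logits"].squeeze(-1)
        kept += [pair for pair, s in zip(batch, scores) if s.item() > threshold]
    return kept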

Concreteness Annotations Dataset

The annotated captions, along with their corresponding ICC scores, can be found here.

Setup

Clone Project

git clone https://github.com/moranyanuka/icc_code.git
cd icc_code

Create the Environment

To set up our environment, please run:

conda env create -f environment.yml
conda activate icc

SBA

The SBA code is largely based on the LLaVA codebase; we thank the authors for their great repo!

SBA Training

Run the following command:

torchrun --nnodes 1 --nproc_per_node 1 --master_port 25000 sba/train/train_mem.py \
         --model_name_or_path meta-llama/Llama-2-7b-hf \
         --text_tower openai/clip-vit-large-patch14 \
         --evaluation_strategy no \
         --tune_mm_mlp_adapter True \
         --mm_text_select_layer -2 \
         --train_data_path data/cc3m_concept_balanced_train.csv \
         --output_dir <path-to-model-output-dir> \
         --num_train_epochs 2 \
         --per_device_train_batch_size 32 \
         --per_device_eval_batch_size 64 \
         --gradient_accumulation_steps 4 \
         --save_strategy steps \
         --save_steps 500 \
         --save_total_limit 1 \
         --bf16 True \
         --learning_rate 2e-3 \
         --weight_decay 0. \
         --warmup_ratio 0.03 \
         --lr_scheduler_type cosine \
         --logging_steps 1 \
         --tf32 True \
         --model_max_length 2048 \
         --gradient_checkpointing True \
         --lazy_preprocess True \
         --cache_dir <path-to-hf-cache_dir (defaults to None)> \
         --report_to wandb
If you don't have sufficient GPU memory, try decreasing the batch size and increasing gradient_accumulation_steps to keep the effective batch size the same (e.g., --per_device_train_batch_size 16 with --gradient_accumulation_steps 8 preserves the effective batch size of 32 x 4 = 128).

SBA inference

To generate the reconstructed captions through the trained SBA, run:

python sba/eval/sba_inference.py \
       --model_path <path-to-trained-sba-model> \
       --cache_dir <path-to-hf-cache_dir (default is None)> \
       --num_beams <num-beams> \
       --batch_size <batch-size> \
       --output-path data/cc3m_concept_balanced_test_with_sba_predictions.csv

This will generate a new file with the reconstructed SBA captions and the corresponding edit distance between each reconstruction and its original caption.
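
For reference, here is a minimal sketch of one plausible edit-distance computation: character-level Levenshtein distance, normalized by the longer string. The exact metric and normalization used by sba_inference.py may differ:

# Hypothetical helpers for illustration only
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_edit_distance(original: str, reconstruction: str) -> float:
    return levenshtein(original, reconstruction) / max(len(original), len(reconstruction), 1)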

VBA

We provide an example script for generating the reconstructed captions through the VBA, using a BLIP-2 captioner and Stable Diffusion 2 as the text-to-image model.

Simply run:

python vba/vba_inference.py \
       --batch_size <batch-size> \
       --cache_dir <path-to-hf-cache_dir (default is None)> \
       --num_beams <num-beams> \
       --output-path data/cc3m_concept_balanced_test_with_vba_predictions.csv

This will generate a file at the given --output-path with the VBA reconstructions.
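
Conceptually, the VBA reconstructs a caption by rendering it to an image and then captioning that image. Below is a minimal sketch of this round trip, assuming stabilityai/stable-diffusion-2 and Salesforce/blip2-opt-2.7b as the checkpoints; vba/vba_inference.py remains the authoritative implementation:

import torch
from diffusers import StableDiffusionPipeline
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda"
# Text-to-image model (checkpoint name is an assumption, see the lead-in above)
t2i = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to(device)
# BLIP-2 captioner
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
captioner = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to(device)

caption = "a man with a white shirt"
image = t2i(caption).images[0]                   # text -> image
inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)
out = captioner.generate(**inputs, num_beams=5)  # image -> text
reconstruction = processor.decode(out[0], skip_special_tokens=True)
print(reconstruction)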
