This repository provides dataset splits and code for the paper:
Disentangling Visual Embeddings for Attributes and Objects
Nirat Saini, Khoi Pham, Abhinav Shrivastava
We provide compositional splits for Generalized CZSL, following prior work.
The dataset and splits can be downloaded from: VAW-CZSL. The downloaded folder contains a Jupyter notebook vaw_dataset_orig.ipynb, a folder named compositional-split-natural, and a metadata file that lists the image ids for each split.
- compositional-split-natural: lists the attribute-object pairs for each split (training, validation and testing); a minimal loading sketch is shown below.
- Images: contains all relevant images used in the VAW-CZSL dataset.
- vaw_dataset_orig.ipynb: explains the steps for creating the splits (more details are in the supplementary material) and builds the dataset splits from scratch.
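As a quick reference, here is a minimal sketch of reading the pair lists. It assumes each split file contains one attribute-object pair per line, and the file names (train_pairs.txt, val_pairs.txt, test_pairs.txt) follow the usual CZSL convention; verify both against the downloaded folder.

```python
# Minimal sketch: load attribute-object pairs from compositional-split-natural.
# Assumes one "<attribute> <object>" pair per line and the conventional file
# names train_pairs.txt / val_pairs.txt / test_pairs.txt (check the folder).
from pathlib import Path

def read_pairs(split_dir, split):
    pairs = []
    with open(Path(split_dir) / f"{split}_pairs.txt") as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:          # skip blank or malformed lines
                pairs.append(tuple(parts))
    return pairs

train_pairs = read_pairs("compositional-split-natural", "train")
print(len(train_pairs), train_pairs[:3])
```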
For building the split files and metadata files from scratch, you need the following (a rough sketch of gathering pairs from the annotations follows this list):
- The VAW dataset from the website: VAW.
- Some images are part of Visual Genome and can be downloaded from the official website.
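As a rough illustration only (not the exact notebook procedure), attribute-object pairs can be gathered from the VAW annotations along these lines. The field names (object_name, positive_attributes) and the file name are assumptions based on the VAW release; see vaw_dataset_orig.ipynb for the actual split-building steps.

```python
# Rough sketch: collect attribute-object pairs from a VAW annotation file.
# Field names ("object_name", "positive_attributes") and the file name are
# assumptions; vaw_dataset_orig.ipynb documents the real procedure.
import json

def collect_pairs(annotation_file):
    with open(annotation_file) as f:
        instances = json.load(f)
    pairs = set()
    for inst in instances:
        obj = inst["object_name"]
        for attr in inst.get("positive_attributes", []):
            pairs.add((attr, obj))
    return pairs

pairs = collect_pairs("train_part1.json")
print(f"{len(pairs)} unique attribute-object pairs")
```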
Pre-requisites:
- Update the paths for the dataset images and the log file in the config/*.yml files (a quick path check is sketched below).
- Download the pre-trained models from here and place them in a folder named saved_models.
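An optional sanity check before training, assuming the config files are standard YAML. The key names used below (data_root, log_dir) are placeholders, not the repo's actual config keys; substitute whatever keys appear in config/*.yml.

```python
# Optional sanity check: confirm the paths set in a config file exist.
# "data_root" and "log_dir" are placeholder key names, not the repo's real keys.
import os
import yaml

with open("config/mit-states.yml") as f:
    cfg = yaml.safe_load(f)

for key in ("data_root", "log_dir"):
    path = cfg.get(key)
    status = "exists" if isinstance(path, str) and os.path.exists(path) else "missing or not set"
    print(f"{key}: {path} ({status})")
```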
To run OADis on the MIT-States dataset:
Training:
python train.py --cfg config/mit-states.yml
Testing:
python test.py --cfg config/mit-states.yml --load mit_final.pth
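To peek inside a downloaded checkpoint outside of test.py, a minimal sketch; whether the .pth file stores a raw state dict or a wrapper dictionary is not assumed here, so it only prints the top-level structure.

```python
# Minimal sketch: inspect a downloaded checkpoint without running test.py.
# We do not assume whether mit_final.pth is a raw state_dict or a wrapper
# dict; just print the top-level structure.
import torch

ckpt = torch.load("saved_models/mit_final.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
else:
    print(type(ckpt))
```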
Similar instructions can be used for the other datasets, UT-Zappos and VAW-CZSL, with their corresponding config files. The code works well and is tested with:
- PyTorch 1.6.0+cu92
- Python 3.6.12
- tensorboardX 2.4
For more qualitative results and details, refer to the Project Page.
For questions and queries, feel free to reach out to Nirat.
Please cite our CVPR 2022 paper if you use this repo for OADis.
@InProceedings{Saini_2022_CVPR,
author = {Saini, Nirat and Pham, Khoi and Shrivastava, Abhinav},
title = {Disentangling Visual Embeddings for Attributes and Objects},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {13658-13667}
}