This is the source code for the paper "Text-Enhanced Graph Attention Hashing for Cross-Modal Retrieval" (TEGAH).
We have uploaded the complete source code and the generated hash codes to this repository. The test.zip archive contains the test scripts provided for evaluation.
- python 3.11
- pytorch 2.1.0
- ...
Device: NVIDIA RTX 3090 Ti GPU; 128 GB system RAM
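A quick, optional sanity check for the environment. This is a minimal sketch; the version numbers above are the tested configuration, and nearby versions will likely work as well.

```python
# Optional environment sanity check (assumes PyTorch is already installed).
import sys
import torch

print("python :", sys.version.split()[0])    # tested with 3.11
print("pytorch:", torch.__version__)         # tested with 2.1.0
print("cuda   :", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device :", torch.cuda.get_device_name(0))
```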
You should generate the following *.mat files for each dataset. The dataset directory should be structured as follows:
dataset
├── coco
│   ├── caption.mat
│   ├── index.mat
│   └── label.mat
├── flickr25k
│   ├── caption.mat
│   ├── index.mat
│   └── label.mat
└── nuswide
    ├── caption.mat
    ├── index.mat
    └── label.mat
Please preprocess each dataset into the appropriate input format.
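As a rough check that the preprocessed files are in place, the sketch below simply loads each file and prints its variables. The root directory name and the use of scipy.io.loadmat are assumptions; adjust them to match your own layout and MAT-file version.

```python
# Minimal sketch: confirm the three *.mat files exist per dataset and can be loaded.
# The root path and the scipy loader are assumptions; files saved in MATLAB v7.3
# format would need h5py instead of scipy.io.
import os
import scipy.io as sio

root = "dataset"  # adjust if you keep the files under ./data
for name in ("coco", "flickr25k", "nuswide"):
    for fname in ("caption.mat", "index.mat", "label.mat"):
        path = os.path.join(root, name, fname)
        assert os.path.isfile(path), f"missing file: {path}"
        mat = sio.loadmat(path)
        keys = [k for k in mat if not k.startswith("__")]
        print(path, "->", {k: mat[k].shape for k in keys})
```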
After preparing the Python environment and the datasets, you can train the TEGAH model and run the provided test scripts for evaluation:
unzip test.zip
python test.py
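For reference, cross-modal hashing methods are usually evaluated with mean average precision (mAP) over Hamming ranking. The sketch below is an independent illustration of that metric, not the evaluation code inside test.zip; the function name and the {-1, +1} code convention are assumptions.

```python
# Illustrative mAP over Hamming ranking (not the repository's test.py).
import numpy as np

def mean_average_precision(q_codes, r_codes, q_labels, r_labels, topk=None):
    """q_codes/r_codes: {-1,+1} arrays (n, bits); q_labels/r_labels: multi-hot (n, c)."""
    n_bits = q_codes.shape[1]
    if topk is None:
        topk = r_codes.shape[0]
    aps = []
    for i in range(q_codes.shape[0]):
        # Relevant if the query and retrieval item share at least one label.
        relevant = (q_labels[i] @ r_labels.T > 0).astype(np.float64)
        # Hamming distance from the inner product of {-1,+1} codes.
        hamming = 0.5 * (n_bits - q_codes[i] @ r_codes.T)
        relevant = relevant[np.argsort(hamming)][:topk]
        hits = relevant.sum()
        if hits == 0:
            continue  # no relevant item for this query
        precision = np.cumsum(relevant) / np.arange(1, len(relevant) + 1)
        aps.append((precision * relevant).sum() / hits)
    return float(np.mean(aps)) if aps else 0.0
```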