This README provides instructions for setting up, training, and testing High-Fidelity Face Age Transformation via Hierarchical Encoding and Contrastive Learning (HECL), a state-of-the-art model for age transformation in face images. The guide assumes a Linux environment and recommends a GPU for execution.
- Create a Conda Environment: use the `environment.yml` file provided in the repository to create a conda environment by running:

  ```shell
  conda env create -f environment.yml
  ```
- Download Datasets: download the following datasets and place them in the directory that contains HECL (not inside the HECL directory itself):
- FFHQ-Aging-Dataset: GitHub Link
- Cross-Age-Face Dataset: GitHub Link
- All-Age-Faces-Dataset: Google Drive Link
Then unzip `All-Age-Faces-Dataset/results/cropped_imgs.zip`.
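Before preprocessing, it can help to confirm that the datasets actually sit next to the HECL directory. The sketch below is a minimal check; the exact folder names are assumptions based on the dataset list above, so adjust them to match what the downloads produce:

```python
from pathlib import Path

# Folder names are assumptions based on the dataset list above; adjust them
# to match what the download and unzip steps actually produce.
EXPECTED_DATASETS = [
    "FFHQ-Aging-Dataset",
    "Cross-Age-Face Dataset",
    "All-Age-Faces-Dataset",
]

def missing_datasets(parent_dir, expected=EXPECTED_DATASETS):
    """Return the expected dataset folders that are absent from parent_dir."""
    parent = Path(parent_dir)
    return [name for name in expected if not (parent / name).is_dir()]

# Usage: run from inside the HECL directory, whose parent should hold the datasets:
# missing = missing_datasets(Path.cwd().parent)
```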
- Preprocess the Datasets: execute the Python scripts below to preprocess the datasets. This step is crucial for preparing the data for training:

  ```shell
  python3 datasets/create_dataset.py
  python3 datasets/create_dataset_caf.py
  python3 datasets/create_dataset_allagesdataset.py
  ```

  Alternatively, you can use the provided shell script to automate this process:

  ```shell
  sh create_dataset.sh
  ```
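The three preprocessing scripts above can also be driven from Python, stopping at the first failure instead of silently continuing. A minimal sketch (the script paths are taken from the commands above; everything else is illustrative):

```python
import subprocess
import sys

# Script paths taken from the preprocessing step above.
PREPROCESSING_SCRIPTS = [
    [sys.executable, "datasets/create_dataset.py"],
    [sys.executable, "datasets/create_dataset_caf.py"],
    [sys.executable, "datasets/create_dataset_allagesdataset.py"],
]

def run_in_order(commands):
    """Run each command in sequence; return the first failing command, or None."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return cmd
    return None

# Usage: failed = run_in_order(PREPROCESSING_SCRIPTS)
```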
- Train the Model: first, start a visdom server in another terminal; it serves on localhost:8097 by default:

  ```shell
  visdom
  ```

  Then modify the --dataroot and --name parameters in the train.sh script according to your needs and run:

  ```shell
  sh train.sh
  ```

- Use Pretrained Models (optional): download checkpoints.zip (Google Drive Link), unzip it, and place the checkpoints folder in the HECL directory. Inference can then be run with the pretrained models.
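For reference, the command inside train.sh might look like the sketch below. The entry-point name train.py and the example values are assumptions; --dataroot and --name are the parameters the training step says to edit:

```shell
# Sketch only: train.py and the example values are assumptions;
# --dataroot and --name are the parameters the training step says to edit.
python3 train.py \
  --dataroot ../FFHQ-Aging-Dataset \
  --name hecl_ffhq
```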
- Run Inference: modify the --dataroot, --name, --which_epoch, and --checkpoint_dir parameters in the test.sh script according to your needs, then run:

  ```shell
  sh test.sh
  ```

Ensure that you have correctly placed the datasets in the required directory before preprocessing.
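For reference, the command inside test.sh might look like the sketch below. The entry-point name test.py and the example values are assumptions; the four flags are the parameters the inference step says to edit:

```shell
# Sketch only: test.py and the example values are assumptions; the four
# flags are the parameters the inference step says to edit.
python3 test.py \
  --dataroot ../FFHQ-Aging-Dataset \
  --name hecl_ffhq \
  --which_epoch latest \
  --checkpoint_dir ./checkpoints
```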
Modify the script parameters carefully to match your local setup for successful training and testing.