Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression
This repository provides the source code for the experiments in our paper.
If you use this code, please cite:

@article{ding2023distilling,
  title={Distilling and transferring knowledge via cGAN-generated samples for image classification and regression},
  author={Ding, Xin and Wang, Yongwei and Xu, Zuheng and Wang, Z Jane and Welch, William J},
  journal={Expert Systems with Applications},
  volume={213},
  pages={119060},
  year={2023},
  publisher={Elsevier}
}
Figure: evolution of fake samples' distributions.

The experiments cover four datasets:
- CIFAR-100
- ImageNet-100
- Steering Angle
- UTKFace
Requirements: argparse>=1.1, h5py>=2.10.0, matplotlib>=3.2.1, numpy>=1.18.5, Pillow>=7.0.0, python=3.8.5, torch>=1.5.0, torchvision>=0.6.0, tqdm>=4.46.1.
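A minimal environment-setup sketch based on the version pins above; the environment name and the use of conda plus pip are assumptions, not part of the original instructions:

```bash
# Create and activate an isolated environment (the name "cgan-kd" is a placeholder).
conda create -n cgan-kd python=3.8.5
conda activate cgan-kd

# argparse ships with Python 3; install the remaining pinned packages.
pip install "h5py>=2.10.0" "matplotlib>=3.2.1" "numpy>=1.18.5" \
            "Pillow>=7.0.0" "torch>=1.5.0" "torchvision>=0.6.0" "tqdm>=4.46.1"
```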
CIFAR-100: Download eval_and_gan_ckpts.7z from https://1drv.ms/u/s!Arj2pETbYnWQuqt036MJ2KdVMKRXAw?e=4mo5SI. Unzip it to get the folder eval_and_gan_ckpts, then put eval_and_gan_ckpts under ./CIFAR-100.
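For example, assuming the archive was downloaded to the repository root and the p7zip command-line tool is available:

```bash
# Extract the archive and move the resulting folder under ./CIFAR-100.
7z x eval_and_gan_ckpts.7z          # creates ./eval_and_gan_ckpts
mv eval_and_gan_ckpts ./CIFAR-100/
```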
ImageNet-100 (dataset): Download ImageNet_128x128_100Class.h5 from https://1drv.ms/u/s!Arj2pETbYnWQtoVvUvsC2xoh3swt4A?e=0PDCJo and put this .h5 file at ./datasets/ImageNet-100.
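For example, assuming the file was downloaded to the repository root:

```bash
# Place the ImageNet-100 dataset file where the scripts expect it.
mkdir -p ./datasets/ImageNet-100
mv ImageNet_128x128_100Class.h5 ./datasets/ImageNet-100/
```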
ImageNet-100 (checkpoints): Download eval_and_gan_ckpts.7z from https://1drv.ms/u/s!Arj2pETbYnWQuqwFLFR8_cf7tWKqtQ?e=o1sPe9. Unzip it to get the folder eval_and_gan_ckpts, then put eval_and_gan_ckpts under ./ImageNet-100.
Steering Angle: Download SteeringAngle_64x64_prop_0.8.h5 from https://1drv.ms/u/s!Arj2pETbYnWQudF7rY9aeP-Eis4_5Q?e=kBkS2P and put this .h5 file at ./datasets/SteeringAngle.
UTKFace: Download UTKFace_64x64_prop_0.8.h5 from https://1drv.ms/u/s!Arj2pETbYnWQucRHFHhZtG9P1iXxpw?e=B6tQsS and put this .h5 file at ./datasets/UTKFace.
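After all downloads, a quick sanity check of the expected layout (the paths are collected from the steps above; the check itself is not part of the original instructions):

```bash
# Verify that the datasets and checkpoint folders are where the scripts expect them.
for f in \
    ./CIFAR-100/eval_and_gan_ckpts \
    ./ImageNet-100/eval_and_gan_ckpts \
    ./datasets/ImageNet-100/ImageNet_128x128_100Class.h5 \
    ./datasets/SteeringAngle/SteeringAngle_64x64_prop_0.8.h5 \
    ./datasets/UTKFace/UTKFace_64x64_prop_0.8.h5
do
    [ -e "$f" ] && echo "OK      $f" || echo "MISSING $f"
done
```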
Remember to correctly set all paths and other settings (e.g., TEACHER, STUDENT, and NFAKE) in .sh files!
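The snippet below only illustrates the kind of variables to check near the top of each script; apart from TEACHER, STUDENT, and NFAKE (named in the note above), the variable names and all values shown are placeholders rather than defaults from this repository:

```bash
# Illustrative settings inside a .sh launch script (placeholders only).
ROOT_PATH="/path/to/cGAN-KD/CIFAR-100"   # local path to the experiment folder
DATA_PATH="/path/to/datasets"            # local path to the downloaded datasets
TEACHER="resnet56"                       # teacher architecture
STUDENT="resnet20"                       # student architecture
NFAKE=100000                             # number of cGAN-generated samples to use
```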
CIFAR-100. The implementation of BigGAN is mainly based on [3,4]. To train BigGAN, run ./CIFAR-100/BigGAN/scripts/launch_cifar100_ema.sh.
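All subsequent steps follow the same pattern; for example (whether a script must be launched from the repository root or from its own subdirectory depends on the paths set inside it, so the cd below is an assumption):

```bash
cd ./CIFAR-100/BigGAN
bash ./scripts/launch_cifar100_ema.sh
```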
Checkpoints of the BigGAN used in our experiments are already included in eval_and_gan_ckpts.7z (see the CIFAR-100 download step above), so this training step can be skipped.
To generate the cGAN-based fake dataset, run ./CIFAR-100/make_fake_datasets/scripts/run.sh.
To train the teacher and student models on real data only (no distillation), run ./CIFAR-100/RepDistiller/scripts/vanilla/run_vanilla.sh.
For the distillation baselines:
- BLKD/FitNet/VID/RKD/CRD: run ./CIFAR-100/RepDistiller/scripts/distill/run_distill.sh
- TAKD: run ./CIFAR-100/TAKD/scripts/distill/run_distill.sh
- SSKD: first run ./CIFAR-100/SSKD/scripts/vanilla/run_vanilla.sh, then run ./CIFAR-100/SSKD/scripts/distill/run_distill.sh
- ReviewKD: run ./CIFAR-100/ReviewKD/scripts/run_distill.sh
For the cGAN-KD-based methods:
- cGAN-KD only (vanilla training with fake data): run ./CIFAR-100/RepDistiller/scripts/vanilla/run_vanilla_fake.sh
- cGAN-KD + BLKD/FitNet/VID/RKD/CRD: run ./CIFAR-100/RepDistiller/scripts/distill/run_distill_fake.sh
- cGAN-KD + SSKD: run ./CIFAR-100/SSKD/scripts/distill/run_distill_fake.sh
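Putting the CIFAR-100 steps together, the overall order is roughly as follows (a sketch only; each script still needs the path, TEACHER, STUDENT, and NFAKE settings mentioned above):

```bash
# 1. Train BigGAN (optional if the provided checkpoints are used).
bash ./CIFAR-100/BigGAN/scripts/launch_cifar100_ema.sh

# 2. Generate the cGAN-based fake dataset.
bash ./CIFAR-100/make_fake_datasets/scripts/run.sh

# 3. Baselines on real data: vanilla training, then a distillation method.
bash ./CIFAR-100/RepDistiller/scripts/vanilla/run_vanilla.sh
bash ./CIFAR-100/RepDistiller/scripts/distill/run_distill.sh

# 4. cGAN-KD: vanilla training with fake data, then distillation with fake data.
bash ./CIFAR-100/RepDistiller/scripts/vanilla/run_vanilla_fake.sh
bash ./CIFAR-100/RepDistiller/scripts/distill/run_distill_fake.sh
```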
ImageNet-100. The implementation of BigGAN is mainly based on [3,4]. Checkpoints of the BigGAN used in our experiments are already included in eval_and_gan_ckpts.7z (see the ImageNet-100 download step above).
To generate the cGAN-based fake dataset, run ./ImageNet-100/make_fake_datasets/scripts/run.sh.
To train the teacher and student models on real data only (no distillation), run ./ImageNet-100/RepDistiller/scripts/vanilla/run_vanilla.sh.
For the distillation baselines:
- BLKD/FitNet/VID/RKD/CRD: run ./ImageNet-100/RepDistiller/scripts/distill/run_distill.sh
- TAKD: run ./ImageNet-100/TAKD/scripts/distill/run_distill.sh
- SSKD: first run ./ImageNet-100/SSKD/scripts/vanilla/run_vanilla.sh, then run ./ImageNet-100/SSKD/scripts/distill/run_distill.sh
- ReviewKD: run ./ImageNet-100/ReviewKD/scripts/run_distill.sh
For the cGAN-KD-based methods:
- cGAN-KD only (vanilla training with fake data): run ./ImageNet-100/RepDistiller/scripts/vanilla/run_vanilla_fake.sh
- cGAN-KD + BLKD/FitNet/VID/RKD/CRD: run ./ImageNet-100/RepDistiller/scripts/distill/run_distill_fake.sh
- cGAN-KD + SSKD: run ./ImageNet-100/SSKD/scripts/distill/run_distill_fake.sh
Steering Angle. The implementation of CcGAN is mainly based on [1,2].
To generate the cGAN-based fake dataset, run ./SteeringAngle/scripts/run_gene_data.sh.
To train the CNNs on real data only, run ./SteeringAngle/scripts/run_cnn.sh.
To train the CNNs with cGAN-generated samples (cGAN-KD), run ./SteeringAngle/scripts/run_cnn_fake.sh.
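In order, as a sketch (the same path-setting caveats apply):

```bash
bash ./SteeringAngle/scripts/run_gene_data.sh   # generate the cGAN-based fake dataset
bash ./SteeringAngle/scripts/run_cnn.sh         # train CNNs on real data only
bash ./SteeringAngle/scripts/run_cnn_fake.sh    # train CNNs with cGAN-generated samples
```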
UTKFace. The implementation of CcGAN is mainly based on [1,2].
To generate the cGAN-based fake dataset, run ./UTKFace/scripts/run_gene_data.sh.
To train the CNNs on real data only, run ./UTKFace/scripts/run_cnn.sh.
To train the CNNs with cGAN-generated samples (cGAN-KD), run ./UTKFace/scripts/run_cnn_fake.sh.
[1] X. Ding, Y. Wang, Z. Xu, W. J. Welch, and Z. J. Wang, “CcGAN: Continuous conditional generative adversarial networks for image generation,” in International Conference on Learning Representations, 2021.
[2] X. Ding, Y. Wang, Z. Xu, W. J. Welch, and Z. J. Wang, “Continuous conditional generative adversarial networks for image generation: Novel losses and label input mechanisms,” arXiv preprint arXiv:2011.07466, 2020. https://github.com/UBCDingXin/improved_CcGAN
[3] https://github.com/ajbrock/BigGAN-PyTorch
[4] X. Ding, Y. Wang, Z. J. Wang, and W. J. Welch, “Efficient subsampling of realistic images from GANs conditional on a class or a continuous variable,” arXiv preprint arXiv:2103.11166v5, 2022. https://github.com/UBCDingXin/cDR-RS