This is the official repository of "Flareon: Stealthy Backdoor Injection via Poisoned Augmentation."
- Install the required Python packages:

      python -m pip install -r requirements.txt
Training commands are as follows; a filled-in example is given after the list:
- Any-to-any:

      python train.py \
          --dataset <dataset name> \
          --attack_ratio <ratio> \
          --aug <augment> \
          --s <beta>
- Adaptive any-to-any:

      python train_learn.py \
          --dataset <dataset name> \
          --attack_ratio <ratio> \
          --aug <augment> \
          --s <beta> \
          --warmup_epochs <epochs> \
          --eps <constraint>
- Any-to-one:

      python train.py \
          --dataset <dataset name> \
          --attack_choice any2one \
          --attack_ratio <ratio> \
          --aug <augment> \
          --s <beta>
- Adaptive any-to-one:

      python train_learn.py \
          --dataset <dataset name> \
          --attack_choice any2one \
          --attack_ratio <ratio> \
          --aug <augment> \
          --s <beta> \
          --warmup_epochs <epochs> \
          --eps <constraint>
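For instance, a filled-in any-to-any run on CIFAR-10 could look like the following; the values here are illustrative, and the valid choices for each parameter are listed below.

    # Standard any-to-any attack on CIFAR-10: AutoAugment, the whole
    # batch poisoned, and trigger initialization beta = 2.
    # (Values are illustrative; see the parameter choices below.)
    python train.py \
        --dataset cifar10 \
        --attack_ratio 100 \
        --aug autoaug \
        --s 2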
The parameter choices for the above commands are as follows:

- Dataset `<dataset name>`: `cifar10`, `celeba`, `tinyimagenet`.
- Poisoned proportion per batch `<ratio>`: 0–100.
- Choice of augmentation `<augment>`: `autoaug`, `randaug`.
- Trigger initialization `<beta>`: 1, 2, 4, ...
- Warmup epochs `<epochs>`: 0–10.
- Learned trigger constraint boundary `<constraint>`: 0.1 (for CIFAR-10), 0.01 (for CelebA), 0.2 (for Tiny-ImageNet).

The trained checkpoints will be saved at `checkpoints/`.
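As a further illustration, an adaptive any-to-one run on CelebA using the constraint boundary suggested above might look like this; the values are illustrative and stay within the ranges listed above.

    # Adaptive any-to-one attack on CelebA: RandAugment, half of each
    # batch poisoned, 2 warmup epochs, and the 0.01 constraint boundary
    # suggested above. (Values are illustrative.)
    python train_learn.py \
        --dataset celeba \
        --attack_choice any2one \
        --attack_ratio 50 \
        --aug randaug \
        --s 1 \
        --warmup_epochs 2 \
        --eps 0.01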
To evaluate trained models, run one of the following commands:

- Any-to-any:

      python test.py \
          --dataset <dataset name> \
          --attack_ratio <ratio> \
          --attack_choice any2any \
          --s <beta>

- Any-to-one:

      python test.py \
          --dataset <dataset name> \
          --attack_ratio <ratio> \
          --attack_choice any2one \
          --s <beta>
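For example, to evaluate the adaptive any-to-one CelebA run sketched above (assuming `test.py` locates the checkpoint under `checkpoints/` from these arguments, so they should match the training run):

    # Evaluate the any-to-one CelebA run from the training example.
    # (Illustrative values; assumed to match the training arguments so
    # the corresponding checkpoint is found.)
    python test.py \
        --dataset celeba \
        --attack_ratio 50 \
        --attack_choice any2one \
        --s 1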