This repository is the official implementation of our paper accepted at WDCS@NeurIPS2020.
Table of Contents:
- Requirements
- Reproduce paper results
- Datasets, pre-trained defense models and attack parameters
- Citing this work
- Authors
To begin with, please make sure your system has these installed:
- Python 3.6.8
- CUDA 10.1
- cuDNN 7.6.4
Then, install all required Python dependencies with the command:
```
pip install -r requirements.txt
```
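Before moving on, you may want to confirm that the pinned interpreter is the one in use. The following is just a suggested sketch, not part of the repository, and it only covers the Python version; CUDA/cuDNN checks depend on the deep learning framework listed in `requirements.txt`:

```python
# Sanity-check the pinned Python version (a suggested sketch, not part of
# the repository; CUDA/cuDNN checks depend on the installed framework).
import sys

version = ".".join(map(str, sys.version_info[:3]))
assert version == "3.6.8", f"expected Python 3.6.8, found {version}"
print("Python version OK:", version)
```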
After downloading the dataset (see here), you will have the following directory tree:
```
./data
  amazon_men/
    original/
      images/
        0.jpg
        1.jpg
        ...
  amazon_women/
    original/
      images/
        0.jpg
        1.jpg
        ...
```
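As a quick check that the archive unpacked correctly, you can count the images per dataset with a short script such as the following (a sketch, not part of the repository):

```python
# Verify the expected directory layout after downloading the datasets
# (a suggested sketch, not part of the repository).
from pathlib import Path

for dataset in ("amazon_men", "amazon_women"):
    images = Path("./data") / dataset / "original" / "images"
    n_images = len(list(images.glob("*.jpg")))
    print(f"{dataset}: {n_images} images in {images}")
```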
This section describes the steps to reproduce the results presented in the paper and explains in depth the input parameters of the scripts involved.
First of all, classify all images from one of the datasets and extract their high-level features by running:
```
python classify_extract.py \
  --dataset <dataset_name> \
  --defense 0 \
  --gpu <gpu_id>
```
If you want to classify images with a defended model (i.e., Adversarial Training or Free Adversarial Training), run the following:
```
python classify_extract.py \
  --dataset <dataset_name> \
  --gpu <gpu_id> \
  --defense 1 \
  --model_dir <model_name> \
  --model_file <model_filename>
```
This will produce `classes.csv` and `features.npy`, where the latter is an N x 2048 `float32` array with the extracted features for all N images. The two files are saved to `./data/<dataset_name>/original/` when no defense is applied; otherwise, they are saved to `./data/<dataset_name>/<model_name>_original/`.
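To get a feel for these outputs, you can inspect them in a couple of lines (a sketch; the `amazon_men` dataset and the non-defended path are illustrative choices):

```python
# Inspect the extracted features and predicted classes (a sketch; the
# dataset name and the non-defended path are illustrative choices).
import numpy as np
import pandas as pd

features = np.load("./data/amazon_men/original/features.npy")
classes = pd.read_csv("./data/amazon_men/original/classes.csv")

print(features.shape, features.dtype)  # expected: (N, 2048) float32
print(classes.head())
```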
After this initial step, run the following command to train one of the available recommender models based on the extracted visual features:
```
python rec_generator.py \
  --dataset <dataset_name> \
  --gpu <gpu_id> \
  --experiment_name <full_experiment_name> \
  --epoch <num_training_epochs> \
  --verbose <show_results_each_n_epochs> \
  --topk 150
```
The recommender models will be stored in `./rec_model_weights/<dataset_name>/`, and the top-150 recommendation lists for each user will be saved in `./rec_results/<dataset_name>/`.
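A quick way to confirm that both artifacts were written is to list the two output folders (a sketch; the file names inside depend on `<full_experiment_name>` and on the trained recommender):

```python
# List the artifacts produced by rec_generator.py (a sketch; actual file
# names depend on the experiment name and the recommender model).
from pathlib import Path

dataset = "amazon_men"  # illustrative choice
for folder in (f"./rec_model_weights/{dataset}", f"./rec_results/{dataset}"):
    print(folder, "->", sorted(p.name for p in Path(folder).iterdir()))
```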
To extract the proposed rank-based metrics (CHR@K and nCDCG@K), execute the following command:
```
python evaluate_rec.py \
  --dataset <dataset_name> \
  --metric <ncdcg or chr> \
  --experiment_name <full_experiment_name> \
  --origin <original_class_id> \
  --topk 150 \
  --analyzed_k <metric_k>
```
Results will be stored in `./chr/<dataset_name>/` and `./ncdcg/<dataset_name>/` in `.tsv` format. At this point, you can use the extracted category-based metrics to select the origin-target pair of class ids for the explored VAR attack scenario.
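If you want to eyeball the metric files before picking the origin-target pair, a few lines suffice (a sketch; the `.tsv` file name is a placeholder and the column layout is repository-specific):

```python
# Peek at the first rows of a CHR/nCDCG result file (a sketch; the file
# name is a placeholder and the column layout is repository-specific).
import csv
from itertools import islice

with open("./chr/amazon_men/<full_experiment_name>.tsv") as f:
    for row in islice(csv.reader(f, delimiter="\t"), 5):
        print(row)
```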
Based on the produced recommendation lists, choose an origin and a target class for each dataset. Then, run one of the available targeted attacks:
```
python classify_extract_attack.py \
  --dataset <dataset_name> \
  --attack_type <attack_name> \
  [ATTACK_PARAMETERS] \
  --origin_class <zero_indexed_origin_class> \
  --target_class <zero_indexed_target_class> \
  --defense 0 \
  --gpu <gpu_id>
```
If you want to run an attack against a defended model, use the following command:
```
python classify_extract_attack.py \
  --dataset <dataset_name> \
  --attack_type <attack_name> \
  [ATTACK_PARAMETERS] \
  --origin_class <zero_indexed_origin_class> \
  --target_class <zero_indexed_target_class> \
  --defense 1 \
  --model_dir <model_name> \
  --model_file <model_filename> \
  --gpu <gpu_id>
```
This will produce (i) all the attacked images, saved in `tiff` format to `./data/<dataset_name>/<full_experiment_name>/images/`, and (ii) the corresponding `classes.csv` and `features.npy`.
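A useful sanity check at this point is to measure the raw pixel perturbation between an original image and its attacked counterpart (a sketch; the paths and the image id are illustrative, and PIL is assumed to be available):

```python
# Compare an attacked image against its original (a sketch; paths and the
# image id are illustrative).
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("./data/amazon_men/original/images/0.jpg"), dtype=np.float32)
adv = np.asarray(Image.open("./data/amazon_men/<full_experiment_name>/images/0.tiff"), dtype=np.float32)

# For L-inf attacks, this should stay within eps (in the same pixel scale).
print("max |perturbation|:", np.abs(adv - orig).max())
```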
Next, generate the recommendation lists for the produced visual attacks, as specified in the Recommendations generation step above.
In order to generate the attack Success Rate (SR) for each attack/defense combination, run the following script:
```
python -u evaluate_attack.py [SAME PARAMETERS SEEN FOR classify_extract_attack.py]
```
This will produce the text file `./data/<dataset_name>/<full_experiment_name>/success_results.txt`, which contains the average SR results.
Then, to generate the Feature Loss (FL) for each attack/defense combination, run the following script:
```
python -u feature_loss.py [SAME PARAMETERS SEEN FOR classify_extract_attack.py]
```
This will generate the text file `./data/<dataset_name>/<full_experiment_name>/features_dist_avg_all_attack.txt` with the average FL results, and the csv file `./data/<dataset_name>/<full_experiment_name>/features_dist_all_attack.csv` with the FL results for each attacked image.
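As a cross-check, the average FL in the text file should match the mean of the per-image values in the csv. Here is a sketch of that check; the header row and the position of the FL column are assumptions, not repository facts:

```python
# Recompute the average Feature Loss from the per-image csv (a sketch; the
# header row and the FL column position are assumptions).
import csv
import statistics

path = "./data/amazon_men/<full_experiment_name>/features_dist_all_attack.csv"
with open(path) as f:
    rows = list(csv.reader(f))

fl_values = [float(row[-1]) for row in rows[1:]]  # skip the assumed header
print("average FL:", statistics.mean(fl_values))
```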
Finally, to evaluate the Learned Perceptual Image Patch Similarity (LPIPS) between attacked and original images, please refer to Zhang et al. and the official GitHub repository. Among all the proposed configurations, we chose the fine-tuned VGG network, since it is the best at imitating real human evaluations in circumstances comparable to visual attacks.
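For reference, computing LPIPS with the official `lpips` PyTorch package looks roughly like this (a sketch; the random tensors stand in for properly loaded images, which must be RGB, shaped `(1, 3, H, W)`, and scaled to `[-1, 1]`):

```python
# LPIPS between an original and an attacked image, using the official
# `lpips` package from Zhang et al. (a sketch with stand-in tensors).
import lpips
import torch

loss_fn = lpips.LPIPS(net='vgg')  # VGG backbone, as chosen in the paper

img0 = torch.rand(1, 3, 224, 224) * 2 - 1  # stand-in for the original image
img1 = torch.rand(1, 3, 224, 224) * 2 - 1  # stand-in for the attacked image
print("LPIPS:", loss_fn(img0, img1).item())
```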
```
# Possible values for 'dataset', 'defense', 'model_dir', 'model_file'
--dataset : {
    amazon_men
    amazon_women
}
--defense : {
    0 # non-defended model
    1 # defended model
}
--model_dir : {
    madry    # adversarial training
    free_adv # free adversarial training
}
--model_file : {
    imagenet_linf_4.pt # adversarial training model filename
    model_best.pth.tar # free adversarial training model filename
}
----------------------------------------------------------------------------------
# [ATTACK_PARAMETERS]
--eps # epsilon
--l   # L norm
# All other attack parameters are hard-coded, but they could be exposed as input parameters, too.
```
| Dataset | k-cores | # Users | # Products | # Feedback |
|---|---|---|---|---|
| Amazon Men | 5 | 24,379 | 7,371 | 89,020 |
| Amazon Women | 10 | 16,668 | 2,981 | 54,473 |
Click here to download datasets.
| Model name | Description | Link |
|---|---|---|
| Adversarial Training (Madry et al.) | | click here |
| Free Adversarial Training (Shafahi et al.) | | click here |
All attacks are implemented with CleverHans.
| Attack name | Parameters |
|---|---|
| FGSM (Goodfellow et al.) | |
| PGD (Madry et al.) | |
| C & W (Carlini and Wagner) | |
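For intuition, this is roughly what a targeted FGSM call looks like through CleverHans (a sketch using the PyTorch interface of CleverHans 4.x; the repository may pin a different CleverHans version, and the backbone, epsilon, and target labels here are illustrative, not the repository's setup):

```python
# Targeted FGSM on a batch of images via CleverHans (a sketch; the model,
# epsilon, and target labels are illustrative).
import numpy as np
import torch
import torchvision.models as models
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

model = models.resnet50().eval()                    # illustrative (untrained) backbone
x = torch.rand(4, 3, 224, 224)                      # stand-in image batch in [0, 1]
y_target = torch.full((4,), 770, dtype=torch.long)  # illustrative target class

x_adv = fast_gradient_method(
    model, x,
    eps=8 / 255,  # illustrative epsilon
    norm=np.inf,  # L-infinity attack
    clip_min=0.0, clip_max=1.0,
    y=y_target, targeted=True,
)
print("max perturbation:", (x_adv - x).abs().max().item())
```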
If you use this work for academic research, you are encouraged to cite our paper:
```
@Article{ADMM20,
  author   = {Vito Walter Anelli and Tommaso {Di Noia} and Daniele Malitesta and Felice Antonio Merra},
  title    = "Assessing Perceptual and Recommendation Mutation of Adversarially-Poisoned Visual Recommenders",
  journal  = "The 1st Workshop on Dataset Curation and Security co-located with the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada (Virtual Event).",
  month    = "dec",
  year     = "2020",
  note     = "Code and Datasets: https://github.com/sisinflab/Perceptual-Rec-Mutation-of-Adv-VRs",
  keywords = "Adversarial Machine Learning, Recommender Systems, Deep Learning",
  url      = "http://sisinflab.poliba.it/publications/2020/ADMM20"
}
```
The authors are in alphabetical order. Corresponding authors are in bold.
- Vito Walter Anelli (vitowalter.anelli@poliba.it)
- Tommaso Di Noia (tommaso.dinoia@poliba.it)
- Daniele Malitesta (daniele.malitesta@poliba.it)
- Felice Antonio Merra (felice.merra@poliba.it)