Abhijay Ghildyal, Feng Liu. In TMLR, 2023. (Featured Certification)
In this study, we systematically examine the robustness of both traditional and learned perceptual similarity metrics to imperceptible adversarial perturbations.
Figure (above): An example of the PGD attack on LPIPS(Alex). The perturbation is imperceptible to all perceptual similarity metrics and humans; in the above sample, we attack the image such that the metric's judgment flips.
Requires Python 3+ and PyTorch 0.4+. For evaluation, please download the data from the links below.
When starting this project, we used the requirements.txt (link) from the LPIPS repository (link). We are grateful to the authors of the various perceptual similarity metrics for making their code and data publicly accessible.
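For reference, a learned metric such as LPIPS(Alex) is simply a differentiable distance between two images. Below is a minimal sketch, assuming the lpips pip package; the random tensors are placeholders for real images scaled to [-1, 1].

import torch
import lpips  # pip install lpips

# LPIPS with an AlexNet backbone; inputs are expected in [-1, 1].
loss_fn = lpips.LPIPS(net='alex')

img0 = torch.rand(1, 3, 64, 64) * 2 - 1  # placeholder reference image
img1 = torch.rand(1, 3, 64, 64) * 2 - 1  # placeholder distorted image

d = loss_fn(img0, img1)  # perceptual distance, shape (1, 1, 1, 1)
print(d.item())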
The transferable adversarial attack samples generated for our benchmark in Table 5 can be downloaded from this Google Drive folder (link). Please unzip transferableAdvSamples.zip into the datasets/ folder.
Alternatively, you can use the following:
cd datasets
gdown 1gA7lD7FtvssQoMQwaaGS_6E3vPkSf66T # get <id> from google drive (see below)
unzip transferableAdvSamples.zip
In case the gdown id changes, you can obtain it from the 'shareable with anyone' link for the transferableAdvSamples.zip file in the aforementioned Google Drive folder. The id is a substring of the shareable link, as shown here: https://drive.google.com/file/d/<id>/view?usp=share_link.
Download the LPIPS repo (link) outside this folder. Then, download the BAPPS dataset as described here: link.
Use the following to benchmark various metrics on the transferable adversarial samples created by attacking LPIPS(Alex) on BAPPS dataset samples via stAdv and PGD.
# L2
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric l2 --save l2
# SSIM
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric ssim --save ssim
# ST-LPIPS(Alex)
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric stlpipsAlex --save stlpipsAlex
The results will be stored in the results/transferableAdv_benchmark/ folder.
Finally, use the IPython notebook results/study_results_transferableAdv_attack.ipynb to calculate the number of flips.
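For context, a "flip" here means the metric's two-alternative judgment changes after the attack: the attacked distortion, originally judged closer to the reference, ends up judged farther than the other distortion. The following is a minimal sketch of that bookkeeping, assuming a callable metric(img0, img1) that returns a scalar distance and already-loaded image tensors; the function names and data layout are illustrative, not the benchmark script's actual API.

def judgment(metric, ref, p0, p1):
    """Return 0 if the metric judges p0 closer to the reference, else 1."""
    return 0 if metric(ref, p0).item() < metric(ref, p1).item() else 1

def count_flips(metric, triplets):
    """triplets: iterable of (ref, p0, p1, p0_adv), where p0 is the attacked image.

    A flip is counted whenever replacing p0 with its adversarial version
    changes which of the two distorted images the metric prefers.
    """
    flips = 0
    for ref, p0, p1, p0_adv in triplets:
        before = judgment(metric, ref, p0, p1)
        after = judgment(metric, ref, p0_adv, p1)
        flips += int(before != after)
    return flips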
The following steps were performed to create the transferable adversarial samples for our benchmark.
- Create adversarial samples by attacking LPIPS(Alex) via the spatial attack stAdv (a simplified flow-field sketch of this idea appears after this list).
CUDA_VISIBLE_DEVICES=0 python create_transferable_stAdv_samples.py
- We perform a visual inspection of the samples before proceeding and weed out those that do not meet our criteria of imperceptibility.
- Using the samples selected in step 2, we attack LPIPS(Alex) via $\ell_\infty$-bounded PGD with different maximum iteration counts.
CUDA_VISIBLE_DEVICES=0 python create_transferable_PGD_samples.py
- Finally, we combine the stAdv and PGD attacks by applying PGD to the samples created via stAdv.
CUDA_VISIBLE_DEVICES=0 python create_transferable_stAdvPGD_samples.py
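As referenced above, here is a simplified sketch of the flow-field idea behind stAdv, adapted to attack a differentiable perceptual metric: a per-pixel flow warps the image to increase the metric's distance to the reference, while a smoothness penalty keeps the warp imperceptible. This is only an illustration under assumed names (a differentiable metric(img0, img1), images in [0, 1]); the actual script targets the 2AFC flip and uses stAdv's original flow-smoothness loss rather than the plain total-variation term used here.

import torch
import torch.nn.functional as F

def stadv_sketch(metric, ref, img, steps=50, lr=0.01, tau=0.05):
    """Warp `img` with a learned flow field so metric(ref, warped) grows,
    while a total-variation penalty keeps the flow smooth (imperceptible)."""
    n = img.shape[0]
    # Identity sampling grid in normalized [-1, 1] coordinates, shape (n, H, W, 2).
    theta = torch.eye(2, 3).unsqueeze(0).repeat(n, 1, 1)
    base_grid = F.affine_grid(theta, list(img.shape), align_corners=True)

    flow = torch.zeros_like(base_grid, requires_grad=True)  # learned pixel offsets
    opt = torch.optim.Adam([flow], lr=lr)

    for _ in range(steps):
        warped = F.grid_sample(img, base_grid + flow, align_corners=True)
        adv_loss = -metric(ref, warped).mean()  # push the perceptual distance up
        tv_loss = (flow[:, 1:, :, :] - flow[:, :-1, :, :]).abs().mean() + \
                  (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
        loss = adv_loss + tau * tv_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        return F.grid_sample(img, base_grid + flow, align_corners=True)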
We hope the above code assists and inspires further studies that test the robustness of perceptual similarity metrics through more extensive benchmarks, additional datasets, and stronger adversarial attacks.
To perform the whitebox PGD attack, run the following:
CUDA_VISIBLE_DEVICES=0 python whitebox_attack_pgd.py --metric lpipsAlex --save lpipsAlex --load_size 64
The results are saved in the results/whitebox_attack/ folder.
Finally, use the IPython notebook results/study_results_whitebox_attack.ipynb to calculate the number of flips and other stats.
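For orientation, the core of such a whitebox attack is a standard $\ell_\infty$-bounded PGD loop on the metric itself: perturb the image the metric currently judges closer to the reference until it is judged farther than the other distortion. Below is a minimal sketch, assuming a differentiable metric(img0, img1) and images in [0, 1]; the epsilon, step size, and iteration count are illustrative, not the script's defaults.

import torch

def pgd_flip_sketch(metric, ref, x_att, x_other, eps=8/255, alpha=1/255, steps=50):
    """Perturb x_att (currently judged closer to ref) within an l_inf ball of
    radius eps until the metric judges it farther from ref than x_other."""
    d_other = metric(ref, x_other).detach().mean()
    x_adv = x_att.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Margin we want to make positive: attacked image farther than the other one.
        margin = metric(ref, x_adv).mean() - d_other
        grad = torch.autograd.grad(margin, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend on the margin
            x_adv = x_att + (x_adv - x_att).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                         # stay a valid image
    return x_adv.detach()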
We provide code to perform the reverse of our attack (see Appendix F), i.e., we attack the less similar of the two distorted images to make it more similar to the reference image.
CUDA_VISIBLE_DEVICES=0 python whitebox_toMakeMoreSimilar_attack_pgd.py --metric lpipsAlex --save lpipsAlex --load_size 64
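Compared with the PGD sketch above, this reversed objective essentially changes only the sign of the update: we descend on the distance between the less similar image and the reference instead of ascending on the margin. A minimal sketch under the same assumptions (differentiable metric, images in [0, 1], illustrative hyperparameters):

import torch

def pgd_make_more_similar_sketch(metric, ref, x_far, eps=8/255, alpha=1/255, steps=50):
    """Nudge x_far (the less similar image) within an l_inf ball of radius eps
    so the metric's distance to ref shrinks."""
    x_adv = x_far.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dist = metric(ref, x_adv).mean()
        grad = torch.autograd.grad(dist, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # descend: more similar
            x_adv = x_far + (x_adv - x_far).clamp(-eps, eps)  # stay in the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()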
To add: code for the FGSM attack and the benchmark on the PieAPP dataset.
If you find this repository useful for your research, please use the following to cite our work:
@article{ghildyal2023attackPercepMetrics,
title={Attacking Perceptual Similarity Metrics},
author={Abhijay Ghildyal and Feng Liu},
journal={Transactions on Machine Learning Research},
year={2023}
}