# Adversarial Robustness

This repository contains the code needed to evaluate models trained in [*Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples*](https://arxiv.org/abs/2010.03593).

## Contents

We have released our top-performing models in two formats compatible with JAX and PyTorch. This repository also contains our model definitions.
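For instance, a downloaded PyTorch checkpoint can be restored into the accompanying model definition roughly as follows. This is a minimal sketch: the `model_zoo.WideResNet` import path, constructor arguments, and checkpoint filename are assumptions for illustration, not a confirmed API.

```python
# Minimal sketch (module name, constructor arguments and filename are
# assumptions): restore a downloaded PyTorch checkpoint into a WRN-70-16
# model definition.
import torch

from model_zoo import WideResNet  # assumed location of the model definition

model = WideResNet(num_classes=10, depth=70, width=16)  # WRN-70-16 for CIFAR-10
state = torch.load('cifar10_linf_wrn70-16.pt', map_location='cpu')  # hypothetical filename
model.load_state_dict(state)
model.eval()
```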

## Running the example code

### Downloading a model

Download a model from the links listed in the following table. Clean and robust accuracies are measured on the full test set; the robust accuracy is measured using AutoAttack. The "extra data" column indicates whether additional unlabeled data was used during training.

| dataset   | norm | radius    | architecture | extra data | clean  | robust | link    |
|-----------|------|-----------|--------------|------------|--------|--------|---------|
| CIFAR-10  | ℓ∞   | 8 / 255   | WRN-70-16    | ✓          | 91.10% | 65.88% | jax, pt |
| CIFAR-10  | ℓ∞   | 8 / 255   | WRN-28-10    | ✓          | 89.48% | 62.80% | jax, pt |
| CIFAR-10  | ℓ∞   | 8 / 255   | WRN-70-16    | ✗          | 85.29% | 57.20% | jax, pt |
| CIFAR-10  | ℓ∞   | 8 / 255   | WRN-34-20    | ✗          | 85.64% | 56.86% | jax, pt |
| CIFAR-10  | ℓ2   | 128 / 255 | WRN-70-16    | ✓          | 94.74% | 80.53% | jax, pt |
| CIFAR-10  | ℓ2   | 128 / 255 | WRN-70-16    | ✗          | 90.90% | 74.50% | jax, pt |
| CIFAR-100 | ℓ∞   | 8 / 255   | WRN-70-16    | ✓          | 69.15% | 36.88% | jax, pt |
| CIFAR-100 | ℓ∞   | 8 / 255   | WRN-70-16    | ✗          | 60.86% | 30.03% | jax, pt |
| MNIST     | ℓ∞   | 0.3       | WRN-28-10    | ✗          | 99.26% | 96.34% | jax, pt |
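The robust numbers above come from AutoAttack. As a rough illustration only (this is not the evaluation script used to produce the table), the snippet below shows how such an evaluation is typically run with the `autoattack` package from https://github.com/fra31/auto-attack; the loaded model and the exact preprocessing are assumptions.

```python
# Illustrative only: measuring robust accuracy with AutoAttack.
# `model` is assumed to be a loaded PyTorch classifier taking inputs in [0, 1]
# (e.g. the WRN-70-16 restored as sketched above).
import torch
import torchvision
import torchvision.transforms as transforms
from autoattack import AutoAttack

test_set = torchvision.datasets.CIFAR10(
    root='./data', train=False, download=True,
    transform=transforms.ToTensor())
x_test = torch.stack([x for x, _ in test_set])
y_test = torch.tensor([y for _, y in test_set])

# ℓ∞ evaluation at radius 8/255, matching the CIFAR-10 rows of the table.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=256)
```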

### Using the model

Once downloaded, a model can be evaluated for clean accuracy by running the `eval.py` script in either the `jax` or `pytorch` folder. For example:

```bash
cd jax
python3 eval.py \
  --ckpt=${PATH_TO_CHECKPOINT} --depth=70 --width=16 --dataset=cifar10
```
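For orientation, a clean-accuracy evaluation of this kind boils down to the loop below. This is a rough sketch, not the repository's `eval.py`; the preprocessing and the `model` object (assumed to be a loaded PyTorch classifier, as sketched earlier) are assumptions.

```python
# Rough sketch of a clean-accuracy loop (not the repository's eval.py);
# assumes `model` is a loaded PyTorch classifier taking inputs in [0, 1].
import torch
import torchvision
import torchvision.transforms as transforms


def clean_accuracy(model, batch_size=256):
  test_set = torchvision.datasets.CIFAR10(
      root='./data', train=False, download=True,
      transform=transforms.ToTensor())
  loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size)
  correct, total = 0, 0
  model.eval()
  with torch.no_grad():
    for images, labels in loader:
      preds = model(images).argmax(dim=1)
      correct += (preds == labels).sum().item()
      total += labels.size(0)
  return 100.0 * correct / total


print(f'Clean accuracy: {clean_accuracy(model):.2f}%')
```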

## Citing this work

If you use this code or these models in your work, please cite the accompanying paper:

```bibtex
@article{gowal2020uncovering,
  title={Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples},
  author={Gowal, Sven and Qin, Chongli and Uesato, Jonathan and Mann, Timothy and Kohli, Pushmeet},
  journal={arXiv preprint arXiv:2010.03593},
  year={2020},
  url={https://arxiv.org/pdf/2010.03593}
}
```

## Disclaimer

This is not an official Google product.