From one of the first papers on adversarial examples, *Explaining and Harnessing Adversarial Examples* (Goodfellow et al., 2015):

> The direction of perturbation, rather than the specific point in space, matters most. Space is not full of pockets of adversarial examples that finely tile the reals like the rational numbers.
This project examines this idea by testing the robustness of a DNN to randomly generated perturbations.
```shell
$ python3 explore_space.py --img images/horse.png
```
This code adds to the input image (`img`) a randomly generated perturbation (`vec1`), which is subject to a max-norm constraint `eps`. The resulting adversarial image lies on a hypercube centered around the original image. To explore a region (a hypersphere) around the adversarial image (`img + vec1`), we add to it another perturbation (`vec2`), whose L2 norm is constrained to `rad`.
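The two perturbations can be sketched with NumPy as follows. This is a minimal sketch, not the actual `explore_space.py` code: the helper names and the CIFAR-sized stand-in image are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_linf_perturbation(shape, eps):
    # vec1: uniform noise inside the eps-hypercube, so every
    # per-pixel offset satisfies |offset| <= eps (max-norm constraint).
    return rng.uniform(-eps, eps, size=shape)

def random_l2_perturbation(shape, rad):
    # vec2: a Gaussian direction rescaled to L2 norm exactly rad,
    # i.e. a point on the radius-rad hypersphere.
    v = rng.standard_normal(shape)
    return rad * v / np.linalg.norm(v)

img = rng.uniform(0, 255, size=(32, 32, 3))   # stand-in for the input image
vec1 = random_linf_perturbation(img.shape, eps=6.0)
vec2 = random_l2_perturbation(img.shape, rad=4.0)
perturbed = np.clip(img + vec1 + vec2, 0.0, 255.0)
```

Sampling `vec2` from a rescaled Gaussian is the standard way to pick a uniformly random direction, since the Gaussian distribution is rotation-invariant.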
Pressing the keys `e` and `r` generates a new `vec1` and a new `vec2`, respectively.
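The key handling might look like the following sketch. The `on_key` dispatcher and the constants are hypothetical; the real script presumably wires an equivalent callback into its GUI toolkit's key-press event.

```python
import numpy as np

rng = np.random.default_rng()
EPS, RAD, SHAPE = 6.0, 4.0, (32, 32, 3)
state = {"vec1": np.zeros(SHAPE), "vec2": np.zeros(SHAPE)}

def on_key(key):
    # 'e' resamples the hypercube perturbation (max norm <= EPS),
    # 'r' resamples the hypersphere perturbation (L2 norm == RAD).
    if key == "e":
        state["vec1"] = rng.uniform(-EPS, EPS, size=SHAPE)
    elif key == "r":
        v = rng.standard_normal(SHAPE)
        state["vec2"] = RAD * v / np.linalg.norm(v)
```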
The classifier is robust to these random perturbations even though they severely degrade the image: the perturbations are clearly noticeable and have a significantly higher max norm than a directed attack needs.
horse | automobile | truck
In the above images, there is no change in class labels and only very small drops in the predicted probability.
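This robustness claim can be checked empirically by sampling many random max-norm perturbations and verifying that the predicted class never changes. A minimal sketch, where the `model_probs` callable is an assumption standing in for the actual classifier:

```python
import numpy as np

def prediction_stable(model_probs, img, eps, n_trials=100, seed=0):
    # model_probs: callable mapping an image to a class-probability vector.
    rng = np.random.default_rng(seed)
    base = int(np.argmax(model_probs(img)))
    for _ in range(n_trials):
        noise = rng.uniform(-eps, eps, size=img.shape)
        noisy = np.clip(img + noise, 0, 255)
        if int(np.argmax(model_probs(noisy))) != base:
            return False   # a random perturbation flipped the label
    return True
```

Usage with a toy "classifier" that scores images by mean brightness:

```python
dummy = lambda x: np.array([x.mean(), 255.0 - x.mean()])
img = np.full((8, 8), 200.0)
prediction_stable(dummy, img, eps=6)
```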
A properly directed perturbation with a max norm as low as 3, which is almost imperceptible, can fool the classifier.
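The "properly directed" direction in the Goodfellow et al. paper is the sign of the loss gradient with respect to the input: the fast gradient sign method (FGSM). A minimal sketch, where the `grad_fn` callable (returning that gradient) is an assumption:

```python
import numpy as np

def fgsm(img, grad_fn, eps):
    # Step of size eps along the sign of the loss gradient. The resulting
    # perturbation has max norm exactly eps wherever clipping does not bite.
    adv = img + eps * np.sign(grad_fn(img))
    return np.clip(adv, 0, 255)
```

Unlike the random perturbations above, every pixel moves in the direction that most increases the loss, which is why such a small max norm suffices.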
horse | predicted: dog | perturbation (eps = 6)