One Pixel Attack

Recently there has been a lot of interest in fooling neural networks trained to classify images from a particular dataset. Researchers have produced various methods that perturb a natural image in such a way that the perturbation is imperceptible to the human eye, yet disastrous to the network's ability to classify the image correctly.

This is an implementation of a recent paper that takes the idea to the extreme: how easy is it to fool a neural network by changing just one pixel? It turns out that current state-of-the-art neural networks are substantially vulnerable to this attack. There is a caveat: the adversary trying to break the network needs query access to the black-box classification model for a fairly large number of trials, and must be able to observe the output probability distribution over the set of classes. Still, these conditions are much weaker than those of previous attacks, where the adversary also needs access to the network's gradients.

This project is a Keras reimplementation of "One pixel attack for fooling deep neural networks" (Su et al.).
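The paper frames the attack as differential evolution over a five-dimensional candidate (x, y, r, g, b): the pixel's coordinates plus its replacement colour. Below is a minimal sketch of that loop, not this repository's actual code; it assumes a Keras `model` over 32x32 RGB CIFAR-10 images in the 0-255 range, and the helper names `perturb` and `attack` are hypothetical.

```python
# Minimal sketch of a one-pixel attack via differential evolution.
# `model`, `perturb`, and `attack` are illustrative assumptions,
# not this repository's actual API.
import numpy as np
from scipy.optimize import differential_evolution

def perturb(image, candidate):
    """Apply a one-pixel change encoded as (x, y, r, g, b)."""
    x, y = int(candidate[0]), int(candidate[1])
    adversarial = image.copy()
    adversarial[x, y] = candidate[2:5]
    return adversarial

def attack(model, image, fitness):
    """Evolve a single-pixel perturbation that minimizes `fitness`.

    Only output probabilities are ever queried, never gradients,
    matching the black-box setting described above.
    """
    # Search space: pixel coordinates plus the replacement RGB value.
    bounds = [(0, 31), (0, 31), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(
        lambda c: fitness(model, perturb(image, c)),
        bounds,
        maxiter=75,
        popsize=20,
    )
    return perturb(image, result.x)
```

Since SciPy's `differential_evolution` only ever calls the fitness function, the classifier remains a pure black box, which is exactly the access model the paper assumes.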

Examples of successful attacks

Targeted attack

Given an input image and an image classification model, the aim of a targeted attack is to maximize the probability the model assigns to a chosen target class; a sketch of the corresponding fitness function follows the example below.

True class: Deer

Target class: Cat

Original deer image: classified as Deer (99.4%)
Perturbed deer image: classified as Cat (52.49%)
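In differential-evolution terms, the targeted objective is just a fitness function: since the optimizer minimizes, one can return the negated probability of the target class. A hedged sketch, reusing the hypothetical `attack` helper from above (`targeted_fitness`, `deer_image`, and `cat_index` are likewise illustrative names):

```python
import numpy as np

def targeted_fitness(target_class):
    """Fitness whose minimum maximizes the target class's probability."""
    def fitness(model, adversarial):
        probs = model.predict(adversarial[np.newaxis], verbose=0)[0]
        return -probs[target_class]  # higher "cat" confidence = lower fitness
    return fitness

# Usage with the sketch above, e.g.:
#   adversarial = attack(model, deer_image, targeted_fitness(cat_index))
```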

Non-targeted attack

Given an input image and an image classification model, the aim of a non-targeted attack is to minimize the probability the model assigns to the true class; a sketch of this fitness function follows the example below.

True class: Dog

Original dog image: classified as Dog (94.8%)
Perturbed dog image: Dog confidence drops to 7.8%, classified as Bird (90.8%)
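The non-targeted fitness is even simpler: minimize the true class's probability directly, and declare success once any other class outranks it, as when Bird overtakes Dog above. A sketch under the same assumptions as before (`non_targeted_fitness` and `attack_succeeded` are hypothetical names):

```python
import numpy as np

def non_targeted_fitness(true_class):
    """Fitness whose minimum drives down the true class's probability."""
    def fitness(model, adversarial):
        probs = model.predict(adversarial[np.newaxis], verbose=0)[0]
        return probs[true_class]  # lower "dog" confidence = lower fitness
    return fitness

def attack_succeeded(model, adversarial, true_class):
    """True once another class outranks the true class, as when
    Bird (90.8%) overtakes Dog (7.8%) in the example above."""
    probs = model.predict(adversarial[np.newaxis], verbose=0)[0]
    return int(np.argmax(probs)) != true_class
```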

Usage

Targeted attack

python targeted.py --config config.yaml --input images/deer.jpg --target cat

Non-targeted attack

python non_targeted.py --config config.yaml --input images/puppy.jpg

Conclusions

  • In my experiments, I found that it is much easier to fool a CIFAR-10 classifier than an ImageNet classifier; the authors note the same in the original paper.
  • Although the success rate is quite low, this experiment was a great learning experience and demonstrates the fragile nature of deep-learning-based image classifiers.
  • Overall, I really liked the paper and enjoyed implementing it; the method is simple yet effective at making its point.
