
💀 A collection of methods to fool deep neural networks 💀


Awesome Fools

A curated list of papers on adversarial examples. Inspired by awesome-deep-vision, awesome-adversarial-machine-learning, awesome-deep-learning-papers, and awesome-architecture-search.

Contributing:

Please feel free to open a pull request or an issue to add papers.

Papers:

  1. Adversarial examples in the physical world (ICLR2017 Workshop)

  2. DeepFool: a simple and accurate method to fool deep neural networks (CVPR2016) The idea is close to the original one: keep perturbing in a loop until the predicted label changes (see the generic label-flip sketch after this list).

  3. Learning with a strong adversary (rejected by ICLR2016?) Applies the spirit of GANs to the optimization.

  4. Decision-based Adversarial Attacks: Reliable Attacks Against Black-box Machine Learning Models (ICLR2018) [code]

  5. The limitations of deep learning in adversarial settings (European Symposium on Security & Privacy, EuroS&P) Proposes the saliency-map attack (JSMA); it does not use a loss function.

  6. Generating Natural Adversarial Examples (ICLR2018)

  7. Simple Black-Box Adversarial Perturbations for Deep Networks (CVPR2017 Workshop) One-pixel attack.

  8. Boosting Adversarial Attacks with Momentum (CVPR2018 Spotlight) Proposes a momentum-iterative attack; see the sketch after this list.

  9. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition (CCS2016) Same spirit as the least-likely-class attack.

  10. Adversarial examples for semantic image segmentation (ICLR2017 Workshop) Same approach as in the classification case.

  11. Explaining and Harnessing Adversarial Examples (ICLR2015) Proposes the Fast Gradient Sign Method (FGSM); see the sketch after this list.

  12. U-turn: Crafting Adversarial Queries with Opposite-direction Features (IJCV2022) [Code] Attacks image retrieval.

  13. Ensemble Adversarial Training: Attacks and Defenses (ICLR2018)

  14. Adversarial Manipulation of Deep Representations (ICLR2016) Attacks the intermediate activations.

  15. Query-efficient Meta Attack to Deep Neural Networks (ICLR2019) Attacks image models using meta-learning.

  16. Sparse adversarial perturbations for videos (AAAI2019) Focuses on sparse perturbations for video models.

  17. Black-box adversarial attacks on video recognition models (ACM MM2019) Attacks video models in the black-box setting.

  18. Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior (ECCV2020) Attacks videos by directly using a motion map as the prior.
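
A minimal FGSM sketch for item 11, written in PyTorch. This is only an illustration under assumed names: `model` is any differentiable classifier, `x` a batch of images in [0, 1], `y` the true labels, and `eps` the perturbation budget; none of these names come from the paper's own code.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step Fast Gradient Sign Method (illustrative sketch)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the gradient, then clamp back to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```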
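
For the "loop until the predicted label changes" idea noted in item 2, a generic label-flip loop looks like the sketch below. It repeats small gradient-sign steps rather than DeepFool's actual boundary projection, and it assumes a batch containing a single image; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def label_flip_attack(model, x, y, step=1 / 255, max_iter=50):
    """Keep perturbing until the prediction differs from the true label (sketch)."""
    x_adv = x.clone().detach()
    for _ in range(max_iter):
        if model(x_adv).argmax(dim=1).item() != y.item():
            break  # stop as soon as the predicted label changes
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv.detach() + step * grad.sign()).clamp(0, 1)
    return x_adv
```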
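
Item 8 stabilizes iterative attacks by accumulating a momentum term on the gradient. A rough momentum-iterative sketch follows, again with assumed names and default values (`eps`, `steps`, `mu`) that are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def momentum_attack(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Momentum-iterative gradient-sign attack (illustrative sketch)."""
    alpha = eps / steps
    g = torch.zeros_like(x)  # momentum buffer
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Accumulate the (roughly L1-normalized) gradient into the momentum buffer.
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around the clean input and the valid range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```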

To Read:

  1. Exploring the space of adversarial images (IJCNN2016)

  2. Towards Deep Learning Models Resistant to Adversarial Attacks (ICLR2018)

  3. Stochastic Activation Pruning for Robust Adversarial Defense (ICLR2018)

  4. Mitigating Adversarial Effects Through Randomization (ICLR2018)

  5. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples (ICLR2018) [Github]

  6. Adversarial Examples Are Not Bugs, They Are Features (NeurIPS 2019)

Talks

  1. Ian Goodfellow's guest lecture on adversarial examples (CS231n)

Blogs

  1. https://bair.berkeley.edu/blog/2017/12/30/yolo-attack/

Competition

  1. MCS2018: https://github.com/Atmyre/MCS2018_Solution
