Serkan Demirci & Alican Büyükçakır
- Dalvi, N., Domingos, P., Sanghai, S., & Verma, D. (2004, August). Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 99-108). ACM. https://info.cs.uab.edu/zhang/Spam-mining-papers/Adversarial.Classification.pdf
- Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., ... & Roli, F. (2013, September). Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases (pp. 387-402). Springer, Berlin, Heidelberg. https://arxiv.org/pdf/1708.06131.pdf
- Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. https://arxiv.org/pdf/1412.6572.pdf
- Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 427-436). https://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_047.pdf
- Moosavi-Dezfooli, S. M., Fawzi, A., Fawzi, O., & Frossard, P. (2017). Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). http://openaccess.thecvf.com/content_cvpr_2017/papers/Moosavi-Dezfooli_Universal_Adversarial_Perturbations_CVPR_2017_paper.pdf
- Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., & Webb, R. (2017, July). Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). http://openaccess.thecvf.com/content_cvpr_2017/papers/Shrivastava_Learning_From_Simulated_CVPR_2017_paper.pdf
- Elsayed, G. F., Shankar, S., Cheung, B., Papernot, N., Kurakin, A., Goodfellow, I., & Sohl-Dickstein, J. (2018). Adversarial examples that fool both human and computer vision. arXiv preprint arXiv:1802.08195. https://arxiv.org/pdf/1802.08195.pdf
- Tramèr, F., Kurakin, A., Papernot, N., Boneh, D., & McDaniel, P. (2017). Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204. https://arxiv.org/pdf/1705.07204.pdf
- Miyato, T., Maeda, S. I., Koyama, M., Nakae, K., & Ishii, S. (2015). Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677. https://arxiv.org/pdf/1507.00677.pdf
- Kurakin, A., Goodfellow, I., & Bengio, S. (2016). Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. https://arxiv.org/pdf/1607.02533.pdf
- Yuan, X., He, P., Zhu, Q., Bhat, R. R., & Li, X. (2017). Adversarial examples: Attacks and defenses for deep learning. arXiv preprint arXiv:1712.07107. https://arxiv.org/pdf/1712.07107.pdf
- Liu, Y., Chen, X., Liu, C., & Song, D. (2016). Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770. https://arxiv.org/abs/1611.02770
- Grosse, K., Manoharan, P., Papernot, N., Backes, M., & McDaniel, P. (2017). On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280. https://arxiv.org/abs/1702.06280
- Tramèr, F., Papernot, N., Goodfellow, I., Boneh, D., & McDaniel, P. (2017). The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453. https://arxiv.org/abs/1704.03453
- Papernot, N., Carlini, N., Goodfellow, I., Feinman, R., Faghri, F., Matyasko, A., ... & Garg, A. (2016). cleverhans v2.0.0: an adversarial machine learning library. arXiv preprint arXiv:1610.00768. https://arxiv.org/pdf/1610.00768.pdf
- Huang, S., Papernot, N., Goodfellow, I., Duan, Y., & Abbeel, P. (2017). Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284. https://arxiv.org/pdf/1702.02284.pdf http://rll.berkeley.edu/adversarial/
- Miyato, T., Maeda, S. I., Koyama, M., & Ishii, S. (2017). Virtual adversarial training: A regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976. https://arxiv.org/pdf/1704.03976.pdf
- Shaham, U., Yamada, Y., & Negahban, S. (2015). Understanding adversarial training: Increasing local stability of neural nets through robust optimization. arXiv preprint arXiv:1511.05432. https://arxiv.org/pdf/1511.05432.pdf