Reconstruction attacks and adversarial examples against ML models
A curated list of resources on research and development around reconstruction attacks against machine learning models and adversarial examples. Two illustrative code sketches follow the reading list.
- Stealing Hyperparameters in Machine Learning, Wang et al. (2018)
- Machine Learning Models that Remember Too Much, Song et al. (2017)
- Towards Measuring Membership Privacy, Long et al. (2017)
- Membership Inference Attacks Against Machine Learning Models, Shokri et al. (2017)
- Towards the Science of Security and Privacy in Machine Learning, Papernot et al. (2016)
- Stealing Machine Learning Models via Prediction APIs, Tramèr et al. (2016)
- The Space of Transferable Adversarial Examples, Tramèr et al. (2017)
- Ensemble Adversarial Training: Attacks and Defenses, Tramèr et al. (2017)
- Adversarial Perturbations Against Deep Neural Networks for Malware Classification, Grosse et al. (2017)
- The Limitations of Deep Learning in Adversarial Settings, Papernot et al. (2016)
- Crafting Adversarial Input Sequences for Recurrent Neural Networks, Papernot et al. (2016)
- Practical Black-Box Attacks against Machine Learning, Papernot et al. (2016)
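
For orientation, below is a minimal sketch of a model-extraction attack through a prediction API, in the spirit of Stealing Machine Learning Models via Prediction APIs (Tramèr et al., 2016): a surrogate classifier is fit to the labels returned by querying a black-box "victim". The victim model, query distribution, and query budget are toy assumptions for illustration, not the setup of any listed paper.

```python
# Illustrative sketch of model extraction via a prediction API.
# The victim, query distribution, and budget are toy assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Train a victim model that the attacker can only query, never inspect.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def prediction_api(queries):
    """Black-box interface: the attacker only sees confidence scores."""
    return victim.predict_proba(queries)

# Attacker: query the API on synthetic inputs and fit a surrogate model
# to the returned labels (a simple hard-label variant of the attack).
queries = rng.normal(size=(500, 5))
labels = prediction_api(queries).argmax(axis=1)
surrogate = LogisticRegression(max_iter=1000).fit(queries, labels)

# Measure how often the surrogate agrees with the victim on fresh points.
test = rng.normal(size=(1000, 5))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"surrogate/victim agreement: {agreement:.2%}")
```

On the adversarial-example side, a minimal one-step gradient-sign perturbation is sketched below. This is the generic sign-of-gradient idea rather than the specific crafting methods of the listed Papernot et al. papers (which use, e.g., saliency maps and black-box substitutes); the toy network, random input, and epsilon value are assumptions for illustration only.

```python
# Illustrative sketch of a one-step gradient-sign perturbation on a toy model.
# The network, input, label, and epsilon are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-class classifier on 20-dimensional inputs.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # clean input
y = torch.tensor([0])                       # its assumed true label

# One gradient step in the direction that most increases the loss.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```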