For a simple proof-of-concept implementation, visit
Accompanying blog post coming soon!
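In the meantime, here is a minimal sketch of the idea behind DeepFool (Moosavi-Dezfooli et al., cited below), restricted to the special case of an affine binary classifier, where the minimal L2 perturbation has a closed form. The weights, bias, and input below are made-up toy values for illustration, not part of any cited implementation.

```python
import numpy as np

# Toy affine binary classifier f(x) = w.x + b; predicted class = sign(f(x)).
# w, b, and x are made-up values for illustration.
w = np.array([2.0, -1.0])
b = 0.5
f = lambda x: w @ x + b

x = np.array([1.0, 0.5])   # f(x) = 2.0 > 0, so x is classified positive

# For an affine classifier, DeepFool's minimal L2 perturbation is the
# projection of x onto the decision boundary f(x) = 0:
#   r = -f(x) / ||w||^2 * w
r = -(f(x) / np.dot(w, w)) * w

# Overshoot slightly (DeepFool scales by 1 + eta) so the perturbed point
# lands strictly on the other side of the boundary.
x_adv = x + 1.02 * r
# f(x_adv) < 0: the decision flips, and ||r|| is the smallest possible
# L2 change that reaches the boundary of this model.
```

For deep networks, DeepFool applies this same projection iteratively to a local linearization of the classifier around the current point.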
DeepFool: a simple and accurate method to fool deep neural networks, S. Moosavi-Dezfooli et al., CVPR 2016
The Limitations of Deep Learning in Adversarial Settings, N. Papernot et al., IEEE EuroS&P 2016
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, N. Papernot et al., arXiv 2016
Adversarial Examples in the Physical World, A. Kurakin et al., ICLR workshop 2017
Delving into Transferable Adversarial Examples and Black-box Attacks, Y. Liu et al., ICLR 2017
Towards Evaluating the Robustness of Neural Networks, N. Carlini et al., IEEE S&P 2017
Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, N. Papernot et al., Asia CCS 2017
Adversarial attacks on neural network policies, S. Huang et al., ICLR workshop 2017
Tactics of Adversarial Attacks on Deep Reinforcement Learning Agents, Y. Lin et al., IJCAI 2017
Delving into adversarial attacks on deep policies, J. Kos et al., ICLR workshop 2017