From dfd44f6dab5e5540bac6bc80b3c2df332597a6d9 Mon Sep 17 00:00:00 2001
From: Justin Hsu
Date: Wed, 1 Aug 2018 01:35:04 -0400
Subject: [PATCH] Add AML readings.

---
 website/docs/resources/readings.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/website/docs/resources/readings.md b/website/docs/resources/readings.md
index 3d4a427..6e21b87 100644
--- a/website/docs/resources/readings.md
+++ b/website/docs/resources/readings.md
@@ -45,6 +45,23 @@ The Spi Calculus*. Information and Computation, 1999.
 
 ### Adversarial Machine Learning
 
+- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru
+  Erhan, Ian Goodfellow, and Rob Fergus. [*Intriguing properties of neural
+  networks*](https://arxiv.org/pdf/1312.6199.pdf). ICLR 2014.
+- Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. [*Explaining and
+  Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572). ICLR 2015.
+- Nicholas Carlini and David Wagner. [*Towards Evaluating the Robustness of
+  Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf). S&P 2017.
+- Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei
+  Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. [*Robust Physical-World
+  Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf). CVPR 2018.
+- Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and
+  Adrian Vladu. [*Towards Deep Learning Models Resistant to Adversarial
+  Attacks*](https://arxiv.org/pdf/1706.06083.pdf). ICLR 2018.
+- Nicholas Carlini and David Wagner. [*Adversarial Examples Are Not Easily Detected:
+  Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf). AISec 2017.
+- Jacob Steinhardt, Pang Wei Koh, and Percy Liang. [*Certified Defenses for Data
+  Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf). NIPS 2017.
 # Supplemental Material
 
 - Cynthia Dwork and Aaron Roth. *Algorithmic Foundations of Data Privacy*.