From 330625b63d9d030fec4c74d00a126e3c9df363d1 Mon Sep 17 00:00:00 2001
From: Justin Hsu
Date: Sun, 22 Sep 2019 23:21:26 -0500
Subject: [PATCH] Add transferability reading.

Looks interesting.
---
 website/docs/resources/readings.md | 3 +++
 website/docs/schedule/lectures.md  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/website/docs/resources/readings.md b/website/docs/resources/readings.md
index 151c852..fd7eb8a 100644
--- a/website/docs/resources/readings.md
+++ b/website/docs/resources/readings.md
@@ -63,6 +63,9 @@
 - Matt Fredrikson, Somesh Jha, and Thomas Ristenpart.
   [*Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures*](https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf).
   CCS 2015.
+- Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow.
+  [*Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples*](https://arxiv.org/abs/1605.07277).
+  arXiv 2016.
 - Nicholas Carlini and David Wagner.
   [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf).
   S&P 2017.
diff --git a/website/docs/schedule/lectures.md b/website/docs/schedule/lectures.md
index 4fd4a57..5fbad7d 100644
--- a/website/docs/schedule/lectures.md
+++ b/website/docs/schedule/lectures.md
@@ -10,7 +10,7 @@
 9/13 | Differentially private machine learning<br />**Reading:** [*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022)<br />**Reading:** [*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755) | Robert/Shengwen | Zach/Jialu |
 | **Adversarial Machine Learning** | |
 9/16 | Overview and basic concepts | JH | --- |
-9/18 | Adversarial examples<br />**Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf)<br />**Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572)<br />**Reading:** [*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf) | JH | Robert/Shengwen |
+9/18 | Adversarial examples<br />**Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf)<br />**Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572)<br />**Reading:** [*Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples*](https://arxiv.org/pdf/1605.07277.pdf) | JH | Robert/Shengwen |
 9/20 | Data poisoning<br />**Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389)<br />**Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | Somya/Zi | Miru/Pierre |
 9/23 | Defenses and detection: challenges<br />**Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf)<br />**Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | JH | --- |
 9/25 | Certified defenses<br />**Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf)<br />**Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | Joseph/Nils | Siddhant/Goutham |