From c89bd96d73e82f8a3a6d82089a33ff7348e88a5f Mon Sep 17 00:00:00 2001 From: Justin Hsu Date: Wed, 4 Sep 2019 12:52:44 -0500 Subject: [PATCH] Add a few more papers. --- website/docs/resources/readings.md | 12 +++++++++--- website/docs/schedule/lectures.md | 6 +++--- 2 files changed, 12 insertions(+), 6 deletions(-) diff --git a/website/docs/resources/readings.md b/website/docs/resources/readings.md index 6c8f1e8..151c852 100644 --- a/website/docs/resources/readings.md +++ b/website/docs/resources/readings.md @@ -84,12 +84,18 @@ - Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344). ICLR 2018. -- Vitaly Feldman. - [*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271). - arXiv 2019. +- Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. + [*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204). + ICLR 2018. +- Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. + [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792). + NeurIPS 2018. - Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. [*The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks*](https://arxiv.org/pdf/1802.08232). USENIX 2019. +- Vitaly Feldman. + [*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271). + arXiv 2019. ### Applied Cryptography - Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish. diff --git a/website/docs/schedule/lectures.md b/website/docs/schedule/lectures.md index 438f7ad..8c7706a 100644 --- a/website/docs/schedule/lectures.md +++ b/website/docs/schedule/lectures.md @@ -11,14 +11,14 @@ |

**Adversarial Machine Learning**

| | 9/16 | Overview and basic concepts | JH | - | 9/18 | Adversarial examples
**Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf)
**Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572)
**Reading:** [*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf) | | | -9/20 | Data poisoning
**Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) | | | +9/20 | Data poisoning
**Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389)
**Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | | | 9/23 | Defenses and detection: challenges
**Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf)
**Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | JH | - | 9/25 | Certified defenses
**Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf)
**Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | | | -9/27 | Adversarial training
**Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) | | | +9/27 | Adversarial training
**Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf)
**Reading:** [*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204) | | | |

**Applied Cryptography**

| | | 9/30 | Overview and basic constructions | JH | - | 10/2 | SMC for machine learning
**Reading:** [*Secure Computation for Machine Learning With SPDZ*](https://arxiv.org/pdf/1901.00329)
**Reading:** [*Helen: Maliciously Secure Coopetitive Learning for Linear Models*](https://arxiv.org/pdf/1907.07212) | | | -10/4 | Secure data collection at scale
**Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) | | | +10/4 | Secure data collection at scale
**Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf)
**Reading:** [*RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response*](https://arxiv.org/pdf/1407.6981.pdf) | | | 10/7 | Verifiable computing
**Reading:** [*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268) | JH | - | 10/9 | Side channels and implementation issues
**Reading:** [*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf) | | | 10/11 | Model watermarking
**Reading:** [*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf)
**Reading:** [*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633) | | | MS1 Due