commit 330625b63d (parent 21f7723f7e)
@@ -63,6 +63,9 @@
 - Matt Fredrikson, Somesh Jha, and Thomas Ristenpart.
 [*Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures*](https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf).
 CCS 2015.
+- Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow.
+[*Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples*](https://arxiv.org/abs/1605.07277).
+arXiv 2016.
 - Nicholas Carlini and David Wagner.
 [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf).
 S&P 2017.
@@ -10,7 +10,7 @@
 9/13 | Differentially private machine learning <br> **Reading:** [*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022) <br> **Reading:** [*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755) | Robert/Shengwen | Zach/Jialu |
 | <center> <h4> **Adversarial Machine Learning** </h4> </center> | |
 9/16 | Overview and basic concepts | JH | --- |
-9/18 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572) <br> **Reading:** [*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf) | JH | Robert/Shengwen |
+9/18 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572) <br> **Reading:** [*Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples*](https://arxiv.org/pdf/1605.07277.pdf) | JH | Robert/Shengwen |
 9/20 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) <br> **Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | Somya/Zi | Miru/Pierre |
 9/23 | Defenses and detection: challenges <br> **Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf) <br> **Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | JH | --- |
 9/25 | Certified defenses <br> **Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf) <br> **Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | Joseph/Nils | Siddhant/Goutham |