Add a few more papers.

parent 788d701947
commit c89bd96d73
```diff
@@ -84,12 +84,18 @@
 - Aditi Raghunathan, Jacob Steinhardt, and Percy Liang.
 [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344).
 ICLR 2018.
-- Vitaly Feldman.
-[*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271).
-arXiv 2019.
+- Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel.
+[*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204).
+ICLR 2018.
+- Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein.
+[*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792).
+NeurIPS 2018.
 - Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song.
 [*The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks*](https://arxiv.org/pdf/1802.08232).
 USENIX 2019.
+- Vitaly Feldman.
+[*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271).
+arXiv 2019.
 
 ### Applied Cryptography
 - Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish.
```
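For context on the newly added Poison Frogs paper: it crafts "clean-label" poisons by feature collision, optimizing a poison image to look like a base-class image in pixel space while landing near a target instance in feature space. Below is a minimal sketch of that objective, assuming PyTorch; the feature extractor, tensor shapes, and hyperparameters are illustrative stand-ins, not the paper's setup (which uses a pretrained network's penultimate layer and a forward-backward splitting optimizer).

```python
import torch

def feature_collision(f, base, target, beta=0.1, lr=0.01, steps=100):
    """Minimize ||f(p) - f(target)||^2 + beta * ||p - base||^2 over the poison p."""
    poison = base.clone().requires_grad_(True)
    target_feat = f(target).detach()  # fixed feature-space destination
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        collide = ((f(poison) - target_feat) ** 2).sum()   # match the target's features
        stay_close = ((poison - base) ** 2).sum()          # stay visually near the base
        (collide + beta * stay_close).backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)  # keep the poison a valid image
    return poison.detach()

# Toy stand-ins just to make the sketch runnable.
f = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
base = torch.rand(1, 3, 32, 32)    # base image: carries the label the attacker assigns honestly
target = torch.rand(1, 3, 32, 32)  # test instance the attacker wants misclassified
poison = feature_collision(f, base, target)
```

The `beta` term keeps the poison visually close to the base image, which is what lets it carry an honest-looking label.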
```diff
@@ -11,14 +11,14 @@
 | <center> <h4> **Adversarial Machine Learning** </h4> </center> | |
 9/16 | Overview and basic concepts | JH | - |
 9/18 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572) <br> **Reading:** [*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf) | | |
-9/20 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) | | |
+9/20 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) <br> **Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | | |
 9/23 | Defenses and detection: challenges <br> **Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf) <br> **Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | JH | - |
 9/25 | Certified defenses <br> **Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf) <br> **Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | | |
-9/27 | Adversarial training <br> **Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) | | |
+9/27 | Adversarial training <br> **Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) <br> **Reading:** [*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204) | | |
 | <center> <h4> **Applied Cryptography** </h4> </center> | | |
 9/30 | Overview and basic constructions | JH | - |
 10/2 | SMC for machine learning <br> **Reading:** [*Secure Computation for Machine Learning With SPDZ*](https://arxiv.org/pdf/1901.00329) <br> **Reading:** [*Helen: Maliciously Secure Coopetitive Learning for Linear Models*](https://arxiv.org/pdf/1907.07212) | | |
-10/4 | Secure data collection at scale <br> **Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) | | |
+10/4 | Secure data collection at scale <br> **Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) <br> **Reading:** [*RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response*](https://arxiv.org/pdf/1407.6981.pdf) | | |
 10/7 | Verifiable computing <br> **Reading:** [*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268) | JH | - |
 10/9 | Side channels and implementation issues <br> **Reading:** [*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf) | | |
 10/11 | Model watermarking <br> **Reading:** [*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf) <br> **Reading:** [*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633) | | | MS1 Due
```
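Likewise, for the Ensemble Adversarial Training reading added to the 9/27 row: the core idea is to craft the adversarial half of each training batch against *static* pre-trained models rather than (only) the model being trained, which Tramèr et al. argue avoids the gradient-masking degeneracy of single-step adversarial training. A minimal sketch, again assuming PyTorch, with toy stand-in models and data rather than the paper's ImageNet setup:

```python
import random
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step L-infinity attack: x + eps * sign(grad_x loss)."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def ensemble_adv_train_step(model, static_models, opt, x, y, eps=0.03):
    # Craft adversarial examples on a randomly chosen source model
    # (usually one of the held-fixed pre-trained models), then take an
    # ordinary training step on the clean + adversarial batch.
    source = random.choice(static_models + [model])
    x_adv = fgsm(source, x, y, eps)
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

# Toy stand-ins just to make the sketch runnable.
make_net = lambda: torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
model, static_models = make_net(), [make_net(), make_net()]
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
ensemble_adv_train_step(model, static_models, opt, x, y)
```

Because the source models are held fixed, the perturbations the trained model sees stay diverse instead of adapting to (and being masked by) its own gradients.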