Calendar (Tentative)
For differential privacy, we will use the textbook The Algorithmic Foundations of Differential Privacy (AFDP) by Cynthia Dwork and Aaron Roth, available here.
Date | Topic | Presenter |
---|---|---|
Differential Privacy | | |
9/5 | Course welcome, introducing differential privacy. Paper: Keshav. How to Read a Paper. | Justin |
9/10 | Basic private mechanisms. Reading: AFDP 3.2, 3.3 | Justin |
9/12 | Composition and closure properties. Reading: AFDP 3.5 | Justin |
9/17 | What does differential privacy actually mean? Reading: McSherry. Lunchtime for Differential Privacy (see also these two posts) | Justin |
9/19 | Exponential mechanism. Paper: McSherry and Talwar. Mechanism Design via Differential Privacy. Due: Project topics and groups | Justin |
9/21 (FRI) | Identity-Based Encryption from the Diffie-Hellman Assumption. SPECIAL TIME AND PLACE: 4 PM, CS 1240 | Sanjam Garg |
9/24 | Advanced mechanisms: Report-noisy-max, Sparse Vector Technique, and Private Multiplicative Weights. Reading: AFDP 3.3, 3.5, 4.2 | Justin |
9/26 | Privacy for data streams. Paper: Chan, Shi, and Song. Private and Continual Release of Statistics. | Yinglun |
10/1 | Local differential privacy. Paper: Erlingsson, Pihur, and Korolova. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. | Justin |
Adversarial Machine Learning | | |
10/3 | AML: overview and basics. GUEST LECTURE | Somesh Jha |
10/8 | History of Adversarial ML. Paper: Biggio and Roli. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning. | Meghana |
10/10 | Adversarial examples. Paper: Szegedy, Zaremba, Sutskever, et al. Intriguing Properties of Neural Networks. | Shimaa |
10/15 | NO CLASS: INSTRUCTOR AWAY | |
10/17 | NO CLASS: INSTRUCTOR AWAY. Due: Milestone 1 | |
10/22 | Adversarial examples. Paper: Goodfellow, Shlens, and Szegedy. Explaining and Harnessing Adversarial Examples. | Kyrie |
10/24 | Real-world attacks. Paper: Eykholt, Evtimov, Fernandes, et al. Robust Physical-World Attacks on Deep Learning Models. | Hiba |
10/29 | Detection methods. Paper: Carlini and Wagner. Towards Evaluating the Robustness of Neural Networks. | Yiqin |
10/31 | Detection methods. Paper: Carlini and Wagner. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. | Junxiong |
11/5 | Defensive measures. Paper: Steinhardt, Koh, and Liang. Certified Defenses for Data Poisoning Attacks. | Yaman |
11/7 | Defensive measures. Paper: Madry, Makelov, Schmidt, Tsipras, and Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. | Maddie |
Cryptographic Techniques | | |
11/12 | Applied crypto: overview and basics | Justin |
11/14 | Verifiable differential privacy. Paper: Narayan, Feldman, Papadimitriou, and Haeberlen. Verifiable Differential Privacy. Due: Milestone 2 | Fayi |
11/19 | Homomorphic encryption. Paper: Halevi and Shoup. Algorithms in HElib. | Yue |
Language-Based Security | | |
11/21 | Language-based security: overview and basics | Justin |
11/26 | Languages for privacy. Paper: Reed and Pierce. Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy. | Sam |
11/28 | TBA | |
12/3 | Languages for authenticated data structures. Paper: Miller, Hicks, Katz, and Shi. Authenticated Data Structures, Generically. | Zichuan |
12/5 | Languages for oblivious computing. Paper: Zahur and Evans. Obliv-C: A Language for Extensible Data-Oblivious Computation. | Zhiyi |
12/10 | Languages for information flow. Paper: Giffin, Levy, Stefan, et al. Hails: Protecting Data Privacy in Untrusted Web Applications. | Arjun |
12/12 | Languages for preventing timing channels. Paper: Zhang, Askarov, and Myers. Language-Based Control and Mitigation of Timing Channels. | Yan |