# Calendar (Tentative)

For differential privacy, we will use the textbook *The Algorithmic Foundations of Differential Privacy* (AFDP) by Cynthia Dwork and Aaron Roth, which is available online.

## Differential Privacy

| Date | Topic | Presenter |
| --- | --- | --- |
| 9/5 | Course welcome, introducing differential privacy<br>Paper: Keshav. How to Read a Paper. | JH |
| 9/10 | Basic private mechanisms<br>Reading: AFDP 3.2, 3.3 | JH |
| 9/12 | Composition and closure properties<br>Reading: AFDP 3.5 | JH |
| 9/17 | What does differential privacy actually mean?<br>Reading: McSherry. Lunchtime for Differential Privacy (see also these two posts) | JH |
| 9/19 | Exponential mechanism<br>Paper: McSherry and Talwar. Mechanism Design via Differential Privacy.<br>Due: Project topics and groups | JH |
| 9/21 (FRI) | Identity-Based Encryption from the Diffie-Hellman Assumption<br>SPECIAL TIME AND PLACE: 4 PM, CS 1240 | Sanjam Garg |
| 9/24 | Report-noisy-max and the Sparse Vector Technique<br>Reading: AFDP 3.3, 3.5 | JH |
| 9/26 | Privacy for data streams<br>Paper: Chan, Shi, and Song. Private and Continual Release of Statistics. | Yinglun |
| 10/1 | Local differential privacy<br>Paper: Erlingsson, Pihur, and Korolova. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. | JH |

## Adversarial Machine Learning

| Date | Topic | Presenter |
| --- | --- | --- |
| 10/3 | AML: overview and basics<br>GUEST LECTURE | Somesh Jha |
| 10/8 | History of adversarial ML<br>Paper: Biggio and Roli. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning. | Meghana |
| 10/10 | Adversarial examples<br>Paper: Szegedy, Zaremba, Sutskever, et al. Intriguing Properties of Neural Networks. | Shimaa |
| 10/15 | NO CLASS: INSTRUCTOR AWAY | |
| 10/17 | NO CLASS: INSTRUCTOR AWAY<br>Due: Milestone 1 | |
| 10/22 | Adversarial examples<br>Paper: Goodfellow, Shlens, and Szegedy. Explaining and Harnessing Adversarial Examples. | Kyrie |
| 10/24 | Real-world attacks<br>Paper: Eykholt, Evtimov, Fernandes, et al. Robust Physical-World Attacks on Deep Learning Models. | Hiba |
| 10/29 | Detection methods<br>Paper: Carlini and Wagner. Towards Evaluating the Robustness of Neural Networks. | Yiqin |
| 10/31 | Detection methods<br>Paper: Carlini and Wagner. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. | Junxiong |
| 11/5 | Defensive measures<br>Paper: Steinhardt, Koh, and Liang. Certified Defenses for Data Poisoning Attacks. | Yaman |
| 11/7 | Defensive measures<br>Paper: Madry, Makelov, Schmidt, Tsipras, and Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. | Maddy |

## Cryptographic Techniques

| Date | Topic | Presenter |
| --- | --- | --- |
| 11/12 | Applied crypto: overview and basics | JH |
| 11/14 | Verifiable computing<br>Paper: Braun, Feldman, Ren, et al. Verifying Computations with State.<br>Due: Milestone 2 | Kan |
| 11/19 | Verifiable differential privacy<br>Paper: Narayan, Feldman, Papadimitriou, and Haeberlen. Verifiable Differential Privacy. | Fayi |
| 11/21 | Homomorphic encryption<br>Paper: Ducas and Micciancio. FHEW: Bootstrapping Homomorphic Encryption in Less than a Second. | Yue |

## Language-Based Security

| Date | Topic | Presenter |
| --- | --- | --- |
| 11/26 | Language-based security: overview and basics | JH |
| 11/28 | Languages for privacy<br>Paper: Reed and Pierce. Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy. | Sam |
| 12/3 | Languages for authenticated data structures<br>Paper: Miller, Hicks, Katz, and Shi. Authenticated Data Structures, Generically. | Zichuan |
| 12/5 | Languages for oblivious computing<br>Paper: Zahur and Evans. Obliv-C: A Language for Extensible Data-Oblivious Computation. | Zhiyi |
| 12/10 | Languages for information flow<br>Paper: Giffin, Levy, Stefan, et al. Hails: Protecting Data Privacy in Untrusted Web Applications. | Arjun |
| 12/12 | Languages for preventing timing channels<br>Paper: Zhang, Askarov, and Myers. Language-Based Control and Mitigation of Timing Channels. | Yan |