Calendar (tentative)
Date | Topic | Presenters | Summarizers | Notes |
---|---|---|---|---|
Differential Privacy | | | | |
9/2 | Course welcome<br>Reading: How to Read a Paper | Justin | --- | [slides] |
9/4 | Basic private mechanisms<br>Reading: Dwork and Roth 3.2-4 | Justin | --- | |
9/7 | NO CLASS: LABOR DAY | | | |
9/9 | Composition and closure properties<br>Reading: Dwork and Roth 3.5 | Justin | --- | Signups Due |
9/11 | What does differential privacy actually mean?<br>Reading: Lunchtime for Differential Privacy | Justin | --- | |
9/14 | Private machine learning<br>Reading: On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches<br>Reading: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data | Nathan/Matt T. | Saniya/Marcus | |
9/16 | Privately generating synthetic data<br>Reading: A Simple and Practical Algorithm for Differentially Private Data Release<br>Reading: Private Post-GAN Boosting | Zijian/Yuchen | Deepan/Kendall | |
Adversarial Machine Learning | | | | |
9/18 | Overview and basic concepts | Justin | --- | |
9/21 | Adversarial examples<br>Reading: Intriguing Properties of Neural Networks<br>Reading: Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples<br>See also: Explaining and Harnessing Adversarial Examples | Deepan/Kendall | Keaton/Anna | |
9/23 | Data poisoning<br>Reading: Poisoning Attacks against Support Vector Machines<br>Reading: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks | Grishma/Lokit | Amos/Suleman | |
9/25 | Defenses and detection: challenges<br>Reading: Towards Evaluating the Robustness of Neural Networks<br>Reading: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods | Justin | --- | |
9/28 | Certified defenses<br>Reading: Certified Defenses for Data Poisoning Attacks<br>Reading: Certified Defenses against Adversarial Examples | Yucheng/Matt W. | Roger/Zifan | |
9/30 | Adversarial training<br>Reading: Towards Deep Learning Models Resistant to Adversarial Attacks<br>See also: Ensemble Adversarial Training: Attacks and Defenses | Nikhil/Scott | Grishma/Lokit | |
Applied Cryptography | | | | |
10/2 | Overview and basic constructions<br>Reading: Boneh and Shoup, 11.6, 19.4<br>See also: Evans, Kolesnikov, and Rosulek, Chapter 3 | Justin | --- | |
10/5 | Secure data collection at scale<br>Reading: Prio: Private, Robust, and Scalable Computation of Aggregate Statistics | Saniya/Marcus | Jinwoo/Mazharul | |
10/7 | Verifiable computing<br>Reading: SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud | Mike | Siyang/Dan | |
10/9 | Side channels and implementation issues<br>Reading: On Significance of the Least Significant Bits For Differential Privacy | Siyang/Dan | Nathan/Matt T. | |
10/12 | Model watermarking<br>Reading: Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring<br>See also: Protecting Intellectual Property of Deep Neural Networks with Watermarking | Amos/Suleman | Sidharth/Martin | MS1 Due |
Algorithmic Fairness | | | | |
10/14 | Overview and basic notions<br>Reading: Barocas, Hardt, and Narayanan, Chapter 1-2<br>See also: 50 Years of Test (Un)fairness: Lessons for Machine Learning | Justin | --- | |
10/16 | Individual and group fairness<br>Reading: Fairness through Awareness<br>Reading: Equality of Opportunity in Supervised Learning | Sidharth/Martin | Vishal/Nikita | |
10/19 | Inherent tradeoffs<br>Reading: Inherent Trade-Offs in the Fair Determination of Risk Scores | Shiyu/Rita | Rishabh/Aaron | |
10/21 | Fairness and causality<br>Reading: Barocas, Hardt, and Narayanan, Chapter 4 | Justin | --- | |
10/23 | Fairness in unsupervised learning<br>Reading: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings<br>See also: Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints | Keaton/Anna | Shiyu/Rita | |
10/26 | Testing fairness, empirically<br>Reading: Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination<br>Reading: Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes<br>See also: Barocas, Hardt, and Narayanan, Chapter 5 | Rishabh/Aaron | Mike | |
PL and Verification | | | | |
10/28 | Overview and basic notions | Justin | --- | |
10/30 | Probabilistic programming languages<br>Reading: Probabilistic Programming | Vishal/Nikita | Zijian/Yuchen | |
11/2 | Verifying probabilistic programs<br>Reading: A Program Logic for Union Bounds<br>See also: Advances and Challenges of Probabilistic Model Checking | Jinwoo/Mazharul | Yucheng/Matt W. | |
11/4 | Languages for differential privacy<br>Reading: Privacy Integrated Queries<br>See also: Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy<br>See also: Programming Language Techniques for Differential Privacy | Ashish/Athena | Nikhil/Scott | |
11/6 | Verifying neural networks<br>Reading: AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation<br>See also: DL2: Training and Querying Neural Networks with Logic | Roger/Zifan | Ashish/Athena | MS2 Due |
No Lectures: Work on Projects | | | | |
12/4 | Project Presentations<br>Grishma, Sidharth, Lokit<br>Saniya, Margaret, Kendall<br>Mike, Zichen, Dong<br>Mazharul<br>Deepan, Siyang<br>Aaron | | | |
12/7 | Project Presentations<br>Amos, Suleman, Rita<br>Vishal, Nikita, Dan<br>Zijian, Yuchen<br>Ashish, Athena<br>Roger, Zifan | | | |
12/9 | Project Presentations<br>Anna, Keaton, Shiyu<br>Nathan<br>Jinwoo<br>Martin<br>Nikhil, Scott<br>Rishabh, Matt, Yucheng | | | |
12/11 | PROJECTS DUE | | | Projects Due |