# Calendar (tentative)

## Differential Privacy

| Date | Topic | Presenters | Summarizers | Notes |
| ---- | ----- | ---------- | ----------- | ----- |
| 9/2 | Course welcome<br>Reading: How to Read a Paper | Justin | --- | |
| 9/4 | Basic private mechanisms<br>Reading: Dwork and Roth 3.2-4 | Justin | --- | |
| 9/7 | NO CLASS: LABOR DAY | | | |
| 9/9 | Composition and closure properties<br>Reading: Dwork and Roth 3.5 | Justin | --- | Signups Due |
| 9/11 | What does differential privacy actually mean?<br>Reading: Lunchtime for Differential Privacy | Justin | --- | |
| 9/14 | Private machine learning<br>Reading: On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches<br>Reading: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data | --- | --- | |
| 9/16 | Privately generating synthetic data<br>Reading: A Simple and Practical Algorithm for Differentially Private Data Release<br>Reading: Private Post-GAN Boosting | --- | --- | |

## Adversarial Machine Learning

| Date | Topic | Presenters | Summarizers | Notes |
| ---- | ----- | ---------- | ----------- | ----- |
| 9/18 | Overview and basic concepts | Justin | --- | |
| 9/21 | Adversarial examples<br>Reading: Intriguing Properties of Neural Networks<br>Reading: Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples<br>See also: Explaining and Harnessing Adversarial Examples | --- | --- | |
| 9/23 | Data poisoning<br>Reading: Poisoning Attacks against Support Vector Machines<br>Reading: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks | --- | --- | |
| 9/25 | Defenses and detection: challenges<br>Reading: Towards Evaluating the Robustness of Neural Networks<br>Reading: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods | Justin | --- | |
| 9/28 | Certified defenses<br>Reading: Certified Defenses for Data Poisoning Attacks<br>Reading: Certified Defenses against Adversarial Examples | --- | --- | |
| 9/30 | Adversarial training<br>Reading: Towards Deep Learning Models Resistant to Adversarial Attacks<br>See also: Ensemble Adversarial Training: Attacks and Defenses | --- | --- | |

## Applied Cryptography

| Date | Topic | Presenters | Summarizers | Notes |
| ---- | ----- | ---------- | ----------- | ----- |
| 10/2 | Overview and basic constructions<br>Reading: Boneh and Shoup, 11.6, 19.4<br>See also: Evans, Kolesnikov, and Rosulek, Chapter 3 | Justin | --- | |
| 10/5 | Secure data collection at scale<br>Reading: Prio: Private, Robust, and Scalable Computation of Aggregate Statistics | --- | --- | |
| 10/7 | Verifiable computing<br>Reading: SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud | --- | --- | |
| 10/9 | Side channels and implementation issues<br>Reading: On Significance of the Least Significant Bits For Differential Privacy | --- | --- | |
| 10/12 | Model watermarking<br>Reading: Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring<br>See also: Protecting Intellectual Property of Deep Neural Networks with Watermarking | --- | --- | MS1 Due |

## Algorithmic Fairness

| Date | Topic | Presenters | Summarizers | Notes |
| ---- | ----- | ---------- | ----------- | ----- |
| 10/14 | Overview and basic notions<br>Reading: Barocas, Hardt, and Narayanan, Chapters 1-2<br>See also: 50 Years of Test (Un)fairness: Lessons for Machine Learning | Justin | --- | |
| 10/16 | Individual and group fairness<br>Reading: Fairness through Awareness<br>Reading: Equality of Opportunity in Supervised Learning | --- | --- | |
| 10/19 | Inherent tradeoffs<br>Reading: Inherent Trade-Offs in the Fair Determination of Risk Scores | --- | --- | |
| 10/21 | Fairness and causality<br>Reading: Barocas, Hardt, and Narayanan, Chapter 4 | Justin | --- | |
| 10/23 | Fairness in unsupervised learning<br>Reading: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings<br>See also: Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints | --- | --- | |
| 10/26 | Testing fairness, empirically<br>Reading: Barocas, Hardt, and Narayanan, Chapter 5 | Justin | --- | |

## PL and Verification

| Date | Topic | Presenters | Summarizers | Notes |
| ---- | ----- | ---------- | ----------- | ----- |
| 10/28 | Overview and basic notions | Justin | --- | |
| 10/30 | Probabilistic programming languages<br>Reading: Probabilistic Programming | --- | --- | |
| 11/2 | Verifying probabilistic programs<br>Reading: A Program Logic for Union Bounds<br>See also: Advances and Challenges of Probabilistic Model Checking | --- | --- | |
| 11/4 | Languages for differential privacy<br>Reading: Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy<br>See also: Programming Language Techniques for Differential Privacy | --- | --- | |
| 11/6 | Verifying neural networks<br>Reading: AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation<br>See also: DL2: Training and Querying Neural Networks with Logic | --- | --- | MS2 Due |

## No Lectures: Work on Projects

| Date | Topic | Presenters | Summarizers | Notes |
| ---- | ----- | ---------- | ----------- | ----- |
| 12/7 | Project Presentations | | | |
| 12/9 | Project Presentations | | | |
| 12/11 | PROJECTS DUE | | | Projects Due |