# Calendar (tentative)
Date | Topic | Presenters | Summarizers | Notes
:----:|-------|:----------:|:-----------:|:-----:
| <center> <h4> **Differential Privacy** </h4> </center> | | |
9/2 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Reading:** [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf) | Justin | --- | [[slides]](../resources/slides/lecture-welcome.html)
9/4 | Basic private mechanisms <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.2-3.4 | Justin | --- |
9/7 | <center> **NO CLASS: LABOR DAY** </center> | | |
9/9 | Composition and closure properties <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.5 | Justin | --- | [Signups](https://docs.google.com/spreadsheets/d/1Qiq6RtBiHD6x7t-wPqAykvTDdbbBvZYSMZ9FrKUHKm4/edit?usp=sharing) Due
9/11 | What does differential privacy actually mean? <br> **Reading:** [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) | Justin | --- |
9/14 | Private machine learning <br> **Reading:** [*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022) <br> **Reading:** [*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755) | Nathan/Matt T. | Saniya/Marcus |
9/16 | Privately generating synthetic data <br> **Reading:** [*A Simple and Practical Algorithm for Differentially Private Data Release*](https://papers.nips.cc/paper/4548-a-simple-and-practical-algorithm-for-differentially-private-data-release.pdf) <br> **Reading:** [*Private Post-GAN Boosting*](https://arxiv.org/pdf/2007.11934) | Zijian/Yuchen | Deepan/Kendall |
| <center> <h4> **Adversarial Machine Learning** </h4> </center> | | |
9/18 | Overview and basic concepts | Justin | --- |
9/21 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples*](https://arxiv.org/pdf/1605.07277) <br> **See also:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572) | Deepan/Kendall | Keaton/Anna |
9/23 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) <br> **Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | Grishma/Lokit | Amos/Suleman |
9/25 | Defenses and detection: challenges <br> **Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf) <br> **Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | Justin | --- |
9/28 | Certified defenses <br> **Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf) <br> **Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | Yucheng/Matt W. | Roger/Zifan |
9/30 | Adversarial training <br> **Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) <br> **See also:** [*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204) | Nikhil/Scott | Grishma/Lokit |
| <center> <h4> **Applied Cryptography** </h4> </center> | | |
10/2 | Overview and basic constructions <br> **Reading:** [Boneh and Shoup](http://toc.cryptobook.us/), 11.6, 19.4 <br> **See also:** [Evans, Kolesnikov, and Rosulek](https://securecomputation.org/), Chapter 3 | Justin | --- |
10/5 | Secure data collection at scale <br> **Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) | Saniya/Marcus | Jinwoo/Mazharul |
10/7 | Verifiable computing <br> **Reading:** [*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268) | Mike | Siyang/Dan |
10/9 | Side channels and implementation issues <br> **Reading:** [*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf) | Siyang/Dan | Nathan/Matt T. |
10/12 | Model watermarking <br> **Reading:** [*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633) <br> **See also:** [*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf) | Amos/Suleman | Sidharth/Martin | MS1 Due
| <center> <h4> **Algorithmic Fairness** </h4> </center> | | |
10/14 | Overview and basic notions <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/index.html), Chapters 1-2 <br> **See also:** [*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104) | Justin | --- |
10/16 | Individual and group fairness <br> **Reading:** [*Fairness through Awareness*](https://arxiv.org/pdf/1104.3913) <br> **Reading:** [*Equality of Opportunity in Supervised Learning*](https://arxiv.org/pdf/1610.02413) | Sidharth/Martin | Vishal/Nikita |
10/19 | Inherent tradeoffs <br> **Reading:** [*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807) | Shiyu/Rita | Rishabh/Aaron |
10/21 | Fairness and causality <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/causal.html), Chapter 4 | Justin | --- |
10/23 | Fairness in unsupervised learning <br> **Reading:** [*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520) <br> **See also:** [*Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints*](https://arxiv.org/pdf/1707.09457) | Keaton/Anna | Shiyu/Rita |
10/26 | Testing fairness, empirically <br> **Reading:** [*Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination*](https://arxiv.org/pdf/1408.6491.pdf) <br> **Reading:** [*Discrimination through optimization: How Facebook's ad delivery can lead to skewed outcomes*](https://arxiv.org/pdf/1904.02095.pdf) <br> **See also:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/testing.html), Chapter 5 | Rishabh/Aaron | Mike |
| <center> <h4> **PL and Verification** </h4> </center> | | |
10/28 | Overview and basic notions | Justin | --- |
10/30 | Probabilistic programming languages <br> **Reading:** [*Probabilistic Programming*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/fose-icse2014.pdf) | Vishal/Nikita | Zijian/Yuchen |
11/2 | Verifying probabilistic programs <br> **Reading:** [*A Program Logic for Union Bounds*](https://arxiv.org/pdf/1602.05681) <br> **See also:** [*Advances and Challenges of Probabilistic Model Checking*](https://www.prismmodelchecker.org/papers/allerton10.pdf) | Jinwoo/Mazharul | Yucheng/Matt W. |
11/4 | Languages for differential privacy <br> **Reading:** [*Privacy Integrated Queries*](https://www.microsoft.com/en-us/research/wp-content/uploads/2009/06/sigmod115-mcsherry.pdf) <br> **See also:** [*Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy*](https://www.cis.upenn.edu/~bcpierce/papers/dp.pdf) <br> **See also:** [*Programming Language Techniques for Differential Privacy*](https://siglog.hosting.acm.org/wp-content/uploads/2016/01/siglog_news_7.pdf) | Ashish/Athena | Nikhil/Scott |
11/6 | Verifying neural networks <br> **Reading:** [*AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation*](https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf) <br> **See also:** [*DL2: Training and Querying Neural Networks with Logic*](http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf) | Roger/Zifan | Ashish/Athena | MS2 Due
| <center> <h4> **No Lectures: Work on Projects** </h4> </center> | | |
12/4 | <center> **Project Presentations** </center> <br> Grishma, Sidharth, Lokit <br> Saniya, Margaret, Kendall <br> Mike, Zichen, Dong <br> Mazharul <br> Deepan, Siyang <br> Aaron | | |
12/7 | <center> **Project Presentations** </center> <br> Amos, Suleman, Rita <br> Vishal, Nikita, Dan <br> Zijian, Yuchen <br> Ashish, Athena <br> Roger, Zifan | | |
12/9 | <center> **Project Presentations** </center> <br> Anna, Keaton, Shiyu <br> Nathan <br> Jinwoo <br> Martin <br> Nikhil, Scott <br> Rishabh, Matt, Yucheng | | |
12/11 | <center> **PROJECTS DUE** </center> | | | Projects Due