Update schedule for Fall 2020.

Justin Hsu 2020-08-14 16:57:21 -05:00
parent ea37a2e57d
commit a433994eb2
3 changed files with 40 additions and 34 deletions


@@ -25,7 +25,7 @@ are no lectures scheduled in this period, I will be available to meet as needed.
 Please use the mailing list if you want to contact the whole course:
-- <mailto:compsci763-1-f20@lists.wisc.edu>
+- <mailto:compsci763-1-f20@g-groups.wisc.edu>
 All registered students should be on this list. If you are not registered but
 would like to follow along, please let me know and I will add you.


@@ -101,7 +101,7 @@
 USENIX 2019.
 - Vitaly Feldman.
 [*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271).
-arXiv 2019.
+STOC 2020.
 ### Applied Cryptography
 - Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish.
@@ -261,10 +261,15 @@
 - Abhinav Verma, Hoang M. Le, Yisong Yue, and Swarat Chaudhuri.
 [*Imitation-Projected Programmatic Reinforcement Learning*](https://arxiv.org/pdf/1907.05431).
 NeurIPS 2019.
+- Kenneth L. McMillan.
+[*Bayesian Interpolants as Explanations for Neural Inferences*](https://arxiv.org/abs/2004.04198).
+arXiv 2020.
 # Supplemental Material
 - Cynthia Dwork and Aaron Roth.
 [*Algorithmic Foundations of Data Privacy*](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf).
+- Solon Barocas, Moritz Hardt, and Arvind Narayanan.
+[*Fairness and Machine Learning: Limitations and Opportunities*](https://fairmlbook.org/index.html).
 - Gilles Barthe, Marco Gaboardi, Justin Hsu, and Benjamin C. Pierce.
 [*Programming Language Techniques for Differential Privacy*](https://siglog.hosting.acm.org/wp-content/uploads/2016/01/siglog_news_7.pdf).
 - Michael Walfish and Andrew J. Blumberg.


@@ -3,39 +3,40 @@
 Date | Topic | Presenters | Summarizers | Notes
 :----:|-------|:----------:|:-----------:|:-----:
 | <center> <h4> **Differential Privacy** </h4> </center> | | |
-9/4 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Reading:** [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf) | JH | --- |
+9/2 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Reading:** [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf) | Justin | --- |
-9/6 | Basic private mechanisms <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.2-4 | JH | --- |
+9/4 | Basic private mechanisms <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.2-4 | Justin | --- |
-9/9 | Composition and closure properties <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.5 | JH | --- | [Signups](https://docs.google.com/spreadsheets/d/1hSbRy0mo3PjlozN0Ph1JkP5JwlRG8y7ukuCdorofncA/edit?usp=sharing) Due
+9/7 | <center> **NO CLASS: LABOR DAY** </center> | | |
-9/11 | What does differential privacy actually mean? <br> **Reading:** [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) | JH | --- |
+9/9 | Composition and closure properties <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.5 | Justin | --- | Signups Due
-9/13 | Differentially private machine learning <br> **Reading:** [*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022) <br> **Reading:** [*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755) | Robert/Shengwen | Zach/Jialu |
+9/11 | What does differential privacy actually mean? <br> **Reading:** [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) | Justin | --- |
+9/14 | Differentially private machine learning <br> **Reading:** [*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022) <br> **Reading:** [*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755) | --- | --- |
+9/16 | Privately generating synthetic data <br> **Reading:** [*Private Post-GAN Boosting*](https://arxiv.org/abs/2007.11934) <br> **See also:** [*A Simple and Practical Algorithm for Differentially Private Data Release*](https://papers.nips.cc/paper/4548-a-simple-and-practical-algorithm-for-differentially-private-data-release.pdf) | --- | --- |
 | <center> <h4> **Adversarial Machine Learning** </h4> </center> | | |
-9/16 | Overview and basic concepts | JH | --- |
+9/18 | Overview and basic concepts | Justin | --- |
-9/18 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572) | JH | Robert/Shengwen |
+9/21 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572) | --- | --- |
-9/20 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) <br> **Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | Somya/Zi | Miru/Pierre |
+9/23 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) <br> **Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | --- | --- |
-9/23 | Defenses and detection: challenges <br> **Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf) <br> **Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | JH | --- |
+9/25 | Defenses and detection: challenges <br> **Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf) <br> **Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | Justin | --- |
-9/25 | Certified defenses <br> **Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf) <br> **Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | Joseph/Nils | Siddhant/Goutham |
+9/28 | Certified defenses <br> **Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf) <br> **Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | --- | --- |
-9/27 | Adversarial training <br> **Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) <br> **See also:** [*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204) | Siddhant/Goutham | Somya/Zi |
+9/30 | Adversarial training <br> **Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) <br> **See also:** [*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204) | --- | --- |
 | <center> <h4> **Applied Cryptography** </h4> </center> | | |
-9/30 | Overview and basic constructions <br> **Reading:** [Boneh and Shoup](https://crypto.stanford.edu/~dabo/cryptobook/BonehShoup_0_4.pdf), 11.6, 19.4 <br> **See also:** [Evans, Kolesnikov, and Rosulek](https://securecomputation.org/), Chapter 3 | JH | --- |
+10/2 | Overview and basic constructions <br> **Reading:** [Boneh and Shoup](https://crypto.stanford.edu/~dabo/cryptobook/BonehShoup_0_4.pdf), 11.6, 19.4 <br> **See also:** [Evans, Kolesnikov, and Rosulek](https://securecomputation.org/), Chapter 3 | Justin | --- |
-10/2 | SMC for machine learning <br> **Reading:** [*Helen: Maliciously Secure Coopetitive Learning for Linear Models*](https://arxiv.org/pdf/1907.07212) <br> **See also:** [*Secure Computation for Machine Learning With SPDZ*](https://arxiv.org/pdf/1901.00329) | Varun/Vibhor/Adarsh | --- |
+10/5 | Secure data collection at scale <br> **Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) | --- | --- |
-10/4 | Secure data collection at scale <br> **Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) | Abhirav/Rajan | --- |
+10/7 | Verifiable computing <br> **Reading:** [*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268) | --- | --- |
-10/7 | Verifiable computing <br> **Reading:** [*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268) | JH | --- |
+10/9 | Side channels and implementation issues <br> **Reading:** [*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf) | --- | --- |
-10/9 | Side channels and implementation issues <br> **Reading:** [*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf) | JH | --- |
+10/12 | Model watermarking <br> **Reading:** [*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633) <br> **See also:** [*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf) | --- | --- | MS1 Due
-10/11 | Model watermarking <br> **Reading:** [*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633) <br> **See also:** [*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf) | Noor/Shashank | Joseph/Nils | MS1 Due
 | <center> <h4> **Algorithmic Fairness** </h4> </center> | | |
-10/14 | Overview and basic notions <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/index.html), Chapter 1-2 | JH | --- |
+10/14 | Overview and basic notions <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/index.html), Chapter 1-2 <br> **See also:** [*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104) | Justin | --- |
-10/16 | Individual and group fairness <br> **Reading:** [*Fairness through Awareness*](https://arxiv.org/pdf/1104.3913) <br> **Reading:** [*Equality of Opportunity in Supervised Learning*](https://arxiv.org/pdf/1610.02413) | JH | Jack/Jack |
+10/16 | Individual and group fairness <br> **Reading:** [*Fairness through Awareness*](https://arxiv.org/pdf/1104.3913) <br> **Reading:** [*Equality of Opportunity in Supervised Learning*](https://arxiv.org/pdf/1610.02413) | --- | --- |
-10/18 | Inherent tradeoffs <br> **Reading:** [*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807) | Bobby | --- |
+10/19 | Inherent tradeoffs <br> **Reading:** [*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807) | --- | --- |
-10/21 | Defining fairness: challenges <br> **Reading:** [*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104) <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/causal.html), Chapter 4 | JH | Bobby |
+10/21 | Fairness and causality <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/causal.html), Chapter 4 | Justin | --- |
-10/23 | Fairness in unsupervised learning <br> **Reading:** [*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520) <br> **See also:** [*Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints*](https://arxiv.org/pdf/1707.09457) | Zach/Jialu | Noor/Shashank |
+10/23 | Fairness in unsupervised learning <br> **Reading:** [*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520) <br> **See also:** [*Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints*](https://arxiv.org/pdf/1707.09457) | --- | --- |
-10/25 | Beyond observational measures <br> **Reading:** [*Avoiding Discrimination through Causal Reasoning*](https://arxiv.org/pdf/1706.02744) <br> **See also:** [*Counterfactual Fairness*](https://arxiv.org/pdf/1703.06856) | Nat/Geetika | Varun/Vibhor/Adarsh |
+10/26 | Testing fairness, empirically <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/causal.html), Chapter 5 | Justin | --- |
 | <center> <h4> **PL and Verification** </h4> </center> | | |
-10/28 | Overview and basic notions | JH | --- |
+10/28 | Overview and basic notions | Justin | --- |
-10/30 | Probabilistic programming languages <br> **Reading:** [*Probabilistic Programming*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/fose-icse2014.pdf) | Miru/Pierre | Nat/Geetika |
+10/30 | Probabilistic programming languages <br> **Reading:** [*Probabilistic Programming*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/fose-icse2014.pdf) | --- | --- |
-11/1 | Automata learning and interpretability <br> **Reading:** [*Model Learning*](https://m-cacm.acm.org/magazines/2017/2/212445-model-learning/fulltext) <br> **Reading:** [*Interpreting Finite Automata for Sequential Data*](https://arxiv.org/pdf/1611.07100) | Jack/Jack | Abhirav/Rajan |
+11/2 | Verifying probabilistic programs <br> **Reading:** [*A Program Logic for Union Bounds*](https://arxiv.org/pdf/1602.05681) <br> **See also:** [*Advances and Challenges of Probabilistic Model Checking*](https://www.prismmodelchecker.org/papers/allerton10.pdf) | --- | --- |
-11/4 | Programming languages for differential privacy <br> **Reading:** [*Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy*](https://www.cis.upenn.edu/~bcpierce/papers/dp.pdf) <br> **See also:** [*Programming Language Techniques for Differential Privacy*](https://siglog.hosting.acm.org/wp-content/uploads/2016/01/siglog_news_7.pdf) | JH | --- |
+11/4 | Languages for differential privacy <br> **Reading:** [*Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy*](https://www.cis.upenn.edu/~bcpierce/papers/dp.pdf) <br> **See also:** [*Programming Language Techniques for Differential Privacy*](https://siglog.hosting.acm.org/wp-content/uploads/2016/01/siglog_news_7.pdf) | --- | --- |
-11/6 | Verifying neural networks <br> **Reading:** [*AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation*](https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf) <br> **See also:** [*DL2: Training and Querying Neural Networks with Logic*](http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf) | JH | --- |
+11/6 | Verifying neural networks <br> **Reading:** [*AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation*](https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf) <br> **See also:** [*DL2: Training and Querying Neural Networks with Logic*](http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf) | --- | --- | MS2 Due
-11/8 | Verifying probabilistic programs <br> **Reading:** [*A Program Logic for Union Bounds*](https://arxiv.org/pdf/1602.05681) <br> **See also:** [*Advances and Challenges of Probabilistic Model Checking*](https://www.prismmodelchecker.org/papers/allerton10.pdf) | JH | | MS2 Due
+| <center> <h4> **No Lectures: Work on Projects** </h4> </center> | | |
-| <center> <h4> **No&nbsp;Lectures:&nbsp;Work&nbsp;on&nbsp;Projects** </h4> </center> | | |
+12/7 | <center> **Project Presentations** </center> | | |
-12/9 | Project Presentations 1 <br> - Nils, Joseph, Abhirav <br> - Robert, Noor, Shashank <br> - Jack L., Geetika <br> - Zi | | |
+12/9 | <center> **Project Presentations** </center> | | |
-12/11 | Project Presentations 2 <br> - Vibhor, Varun, Adarsh <br> - Siddhant, Goutham, Somya <br> - Nat, Zach, Jialu <br> - Miru, Pierre, Jack S. <br> - Shengwen, Rajan, Bobby | | | Projects Due
+12/11 | <center> **PROJECTS DUE** </center> | | | Projects Due