Fill out schedule with papers.

This commit is contained in:
Justin Hsu 2019-08-23 18:22:44 -05:00
parent fc344fadbb
commit 4bbc39bf34
4 changed files with 145 additions and 55 deletions

View File

@@ -26,12 +26,13 @@ These three components are detailed below.
### Paper presentations
**Paper discussions** are one of the main components of this course. In groups
of two (or very rarely three), you will present 2-3 papers on a related topic
and lead the discussion; we will have presentations most Wednesdays and Fridays.
Your presentation should last about **60 minutes**, leaving the remainder
of the time for a wrap-up discussion. Please sign up for a slot and a paper by
**Monday, September 9**; while we will try to accommodate everyone's interests,
we may need to adjust the selections for better balance and coverage.
of two (or very rarely three), you will present 1-2 papers on a related topic
and lead the discussion. We will have presentations most Wednesdays and Fridays.
Each presentation should be about **60 minutes**, leaving the remainder of the
time for a wrap-up discussion. Please sign up for a slot by **Monday, September
9**; see the [calendar](schedule/lectures.md) for the topic and suggested papers
for each slot. While we will try to accommodate everyone's interests, we may
need to adjust the selections for better balance and coverage.
Before every presentation, all students are expected to read the papers closely
and understand their significance, including (a) the main problems, (b) the

View File

@@ -1,4 +1,4 @@
# Paper Suggestions
# Assorted Papers
### Differential Privacy
- Frank McSherry and Kunal Talwar.
@@ -10,6 +10,9 @@
- T.-H. Hubert Chan, Elaine Shi, and Dawn Song.
[*Private and Continual Release of Statistics*](https://eprint.iacr.org/2010/076.pdf).
ICALP 2010.
- Ilya Mironov.
[*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf).
CCS 2012.
- Moritz Hardt, Katrina Ligett, and Frank McSherry.
[*A Simple and Practical Algorithm for Differentially Private Data Release*](https://papers.nips.cc/paper/4548-a-simple-and-practical-algorithm-for-differentially-private-data-release.pdf).
NIPS 2012.
@@ -22,44 +25,71 @@
- Cynthia Dwork, Moni Naor, Omer Reingold, and Guy N. Rothblum.
[*Pure Differential Privacy for Rectangle Queries via Private Partitions*](https://guyrothblum.files.wordpress.com/2017/06/dnrr15.pdf).
ASIACRYPT 2015.
- Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang.
[*Deep Learning with Differential Privacy*](https://arxiv.org/pdf/1607.00133).
CCS 2016.
- Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, and Li Zhang.
[*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022).
CSF 2017.
- Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar.
[*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755).
ICLR 2017.
- Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Úlfar Erlingsson.
[*Scalable Private Learning with PATE*](https://arxiv.org/pdf/1802.08908).
ICLR 2018.
- Matthew Joseph, Aaron Roth, Jonathan Ullman, and Bo Waggoner.
[*Local Differential Privacy for Evolving Data*](https://arxiv.org/abs/1802.07128).
NIPS 2018.
NeurIPS 2018.
- Albert Cheu, Adam Smith, Jonathan Ullman, David Zeber, and Maxim Zhilyaev.
[*Distributed Differential Privacy via Shuffling*](https://arxiv.org/pdf/1808.01394).
EUROCRYPT 2019.
- Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta.
[*Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity*](https://arxiv.org/pdf/1811.12469).
SODA 2019.
- Jingcheng Liu and Kunal Talwar.
[*Private Selection from Private Candidates*](https://arxiv.org/pdf/1811.07971).
STOC 2019.
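Since several sessions build on these papers, a minimal sketch of the Laplace mechanism they all assume may help orient first-time readers. This is an illustration only (not course code), assuming `numpy`; its naive floating-point sampling is exactly the pitfall analyzed in Mironov's CCS 2012 paper above.

```python
import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Release true_answer with epsilon-differential privacy by adding
    Laplace noise scaled to sensitivity / epsilon (cf. the AFDP reading)."""
    # Caveat: floating-point sampling leaks low-order bits (Mironov, CCS 2012).
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count (sensitivity 1) with epsilon = 0.1.
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.1)
```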
### Adversarial Machine Learning
### Adversarial ML
- Battista Biggio, Blaine Nelson, and Pavel Laskov.
[*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389).
ICML 2012.
- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus.
[*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf).
ICLR 2014.
- Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy.
[*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572).
ICLR 2015.
- Matt Fredrikson, Somesh Jha, and Thomas Ristenpart.
[*Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures*](https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf).
CCS 2015.
- Nicholas Carlini and David Wagner.
[*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf).
S&P 2017.
- Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song.
[*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf).
CVPR 2018.
- Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov.
[*Membership Inference Attacks against Machine Learning Models*](https://arxiv.org/pdf/1610.05820).
S&P 2017.
- Nicholas Carlini and David Wagner.
[*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf).
AISec 2017.
- Jacob Steinhardt, Pang Wei Koh, and Percy Liang.
[*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf).
NIPS 2017.
- Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song.
[*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf).
CVPR 2018.
- Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.
[*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf).
ICLR 2018.
- Aditi Raghunathan, Jacob Steinhardt, and Percy Liang.
[*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344).
ICLR 2018.
- Vitaly Feldman.
[*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271).
arXiv 2019.
- Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song.
[*The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks*](https://arxiv.org/pdf/1802.08232).
USENIX Security 2019.
USENIX 2019.
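As a pointer into the adversarial-examples papers above, the fast gradient sign method of Goodfellow et al. is essentially a one-liner. The sketch below is a toy illustration (hypothetical logistic model, `numpy` only), not a reproduction of any paper's code:

```python
import numpy as np

def fgsm(x: np.ndarray, grad_loss_x: np.ndarray, eps: float) -> np.ndarray:
    """Fast gradient sign method: step eps in the sign of the loss
    gradient with respect to the input (Goodfellow et al., ICLR 2015)."""
    return x + eps * np.sign(grad_loss_x)

# Toy target: logistic model p = sigmoid(w.x) with true label y = 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad = (p - 1.0) * w               # gradient of -log(p) with respect to x
x_adv = fgsm(x, grad, eps=0.1)     # small perturbation that raises the loss
```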
### Applied Cryptography
- Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish.
@@ -89,12 +119,24 @@
- Henry Corrigan-Gibbs and Dan Boneh.
[*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf).
NSDI 2017.
- Zahra Ghodsi, Tianyu Gu, and Siddharth Garg.
[*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268).
NIPS 2017.
- Valerie Chen, Valerio Pastro, and Mariana Raykova.
[*Secure Computation for Machine Learning With SPDZ*](https://arxiv.org/pdf/1901.00329).
NIPS 2018.
NeurIPS 2018.
- Jialong Zhang, Zhongshu Gu, Jiyong Jang, Hui Wu, Marc Ph. Stoecklin, Heqing Huang, and Ian Molloy.
[*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf).
AsiaCCS 2018.
- Yossi Adi, Carsten Baum, Moustapha Cisse, Benny Pinkas, and Joseph Keshet.
[*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633).
USENIX 2018.
- Wenting Zheng, Raluca Ada Popa, Joseph E. Gonzalez, and Ion Stoica.
[*Helen: Maliciously Secure Coopetitive Learning for Linear Models*](https://arxiv.org/pdf/1907.07212).
S&P 2019.
- Bita Darvish Rouhani, Huili Chen, and Farinaz Koushanfar.
[*DeepSigns: A Generic Watermarking Framework for IP Protection of Deep Learning Models*](https://arxiv.org/pdf/1804.00750).
ASPLOS 2019.
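The secure-computation papers above (SPDZ, Prio, Helen) all build on secret sharing. As a rough sketch of just the core primitive (real protocols add MACs, preprocessing, and malicious-security checks), additive shares modulo a prime hide a value while still supporting local addition:

```python
import secrets

P = 2**61 - 1  # prime modulus; any sufficiently large prime works

def share(x: int, n: int = 3) -> list:
    """Split x into n additive shares mod P; any n-1 shares look uniform."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % P

# Shares add locally: shares of 5 plus shares of 7 reconstruct to 12.
sx, sy = share(5), share(7)
assert reconstruct([a + b for a, b in zip(sx, sy)]) == 12
```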
### Algorithmic Fairness
- Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Rich Zemel.
@@ -106,9 +148,21 @@
- Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai.
[*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520).
NIPS 2016.
- Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang.
[*Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints*](https://arxiv.org/pdf/1707.09457).
EMNLP 2017.
- Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan.
[*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807).
ITCS 2017.
- Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf.
[*Avoiding Discrimination through Causal Reasoning*](https://arxiv.org/pdf/1706.02744).
NIPS 2017.
- Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva.
[*Counterfactual Fairness*](https://arxiv.org/pdf/1703.06856).
NIPS 2017.
- Razieh Nabi and Ilya Shpitser.
[*Fair Inference on Outcomes*](https://arxiv.org/pdf/1705.10378).
AAAI 2018.
- Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum.
[*Multicalibration: Calibration for the (Computationally-Identifiable) Masses*](https://arxiv.org/pdf/1711.08513.pdf).
ICML 2018.
@@ -122,13 +176,19 @@
[*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104).
FAT\* 2019.
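To make the definitional debates in these papers concrete, one of the simplest group-fairness statistics, the demographic parity gap, is directly computable. A toy sketch with made-up predictions, assuming `numpy`:

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups;
    zero means the classifier satisfies demographic parity."""
    return abs(float(y_pred[group == 0].mean()) - float(y_pred[group == 1].mean()))

y_pred = np.array([1, 0, 1, 1, 0, 0])   # hypothetical classifier outputs
group  = np.array([0, 0, 0, 1, 1, 1])   # hypothetical protected attribute
print(demographic_parity_gap(y_pred, group))  # |2/3 - 1/3| = 1/3
```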
### Programming Languages and Verification
### PL and Verification
- Martín Abadi and Andrew D. Gordon.
[*A Calculus for Cryptographic Protocols: The Spi Calculus*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/ic99spi.pdf).
Information and Computation, 1999.
- Noah Goodman, Vikash Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua B. Tenenbaum.
[*Church: a language for generative models*](https://arxiv.org/pdf/1206.3255).
UAI 2008.
- Frank McSherry.
[*Privacy Integrated Queries*](http://citeseerx.ist.psu.edu/viewdoc/download?rep=rep1&type=pdf&doi=10.1.1.211.4503).
SIGMOD 2009.
- Marta Kwiatkowska, Gethin Norman, and David Parker.
[*Advances and Challenges of Probabilistic Model Checking*](https://www.prismmodelchecker.org/papers/allerton10.pdf).
Allerton 2010.
- Jason Reed and Benjamin C. Pierce.
[*Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy*](https://www.cis.upenn.edu/~bcpierce/papers/dp.pdf).
ICFP 2010.
@@ -141,6 +201,9 @@
- Andrew Miller, Michael Hicks, Jonathan Katz, and Elaine Shi.
[*Authenticated Data Structures, Generically*](https://www.cs.umd.edu/~mwh/papers/gpads.pdf).
POPL 2014.
- Andrew D. Gordon, Thomas A. Henzinger, Aditya V. Nori, and Sriram K. Rajamani.
[*Probabilistic Programming*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/fose-icse2014.pdf).
ICSE 2014.
- Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, and Pierre-Yves Strub.
[*Higher-Order Approximate Relational Refinement Types for Mechanism Design and Differential Privacy*](https://arxiv.org/pdf/1407.6845.pdf).
POPL 2015.
@@ -150,9 +213,27 @@
- Chang Liu, Xiao Shaun Wang, Kartik Nayak, Yan Huang, and Elaine Shi.
[*ObliVM: A Programming Framework for Secure Computation*](http://www.cs.umd.edu/~elaine/docs/oblivm.pdf).
S&P 2015.
- Gilles Barthe, Marco Gaboardi, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub.
[*A Program Logic for Union Bounds*](https://arxiv.org/pdf/1602.05681).
ICALP 2016.
- Christian Albert Hammerschmidt, Sicco Verwer, Qin Lin, and Radu State.
[*Interpreting Finite Automata for Sequential Data*](https://arxiv.org/pdf/1611.07100).
NIPS 2016.
- Joost-Pieter Katoen.
[*The Probabilistic Model Checking Landscape*](https://moves.rwth-aachen.de/wp-content/uploads/lics2016_tutorial_katoen.pdf).
LICS 2016.
- Andrew Ferraiuolo, Rui Xu, Danfeng Zhang, Andrew C. Myers, and G. Edward Suh.
[*Verification of a Practical Hardware Security Architecture Through Static Information Flow Analysis*](http://www.cse.psu.edu/~dbz5017/pub/asplos17.pdf).
ASPLOS 2017.
- Frits Vaandrager.
[*Model Learning*](https://m-cacm.acm.org/magazines/2017/2/212445-model-learning/fulltext).
CACM 2017.
- Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev.
[*AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation*](https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf).
S&P 2018.
- Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, and Martin Vechev.
[*DL2: Training and Querying Neural Networks with Logic*](http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf).
ICML 2019.
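For readers new to the probabilistic-programming entries above, the core idea behind languages like Church, sampling from a generative model and conditioning on an observation, can be mimicked with plain rejection sampling. A minimal sketch, not tied to any listed system:

```python
import random

def flip(p: float = 0.5) -> bool:
    return random.random() < p

def model():
    """Two coin flips; condition on 'at least one heads'; query the first."""
    a, b = flip(), flip()
    if not (a or b):
        return None             # rejected: observation failed
    return a

samples = [s for s in (model() for _ in range(100_000)) if s is not None]
print(sum(samples) / len(samples))  # approx. 2/3 = P(a | a or b)
```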
# Supplemental Material
- Cynthia Dwork and Aaron Roth.
@@ -165,3 +246,9 @@
[*A Survey of Symbolic Methods in Computational Analysis of Cryptographic Systems*](https://hal.inria.fr/inria-00379776/document).
- Dan Boneh and Victor Shoup.
[*A Graduate Course in Applied Cryptography*](https://crypto.stanford.edu/~dabo/cryptobook/BonehShoup_0_4.pdf).
- David Hand.
[*Statistics and the Theory of Measurement*](http://www.lps.uci.edu/~johnsonk/CLASSES/MeasurementTheory/Hand1996.StatisticsAndTheTheoryOfMeasurement.pdf).
- Judea Pearl.
[*Causal inference in statistics: An overview*](http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf).
- Yehuda Lindell and Benny Pinkas.
[*Secure Multiparty Computation for Privacy-Preserving Data Mining*](https://eprint.iacr.org/2008/197.pdf).

View File

@@ -1,4 +1,6 @@
- CSE 291: [Language-Based Security](https://cseweb.ucsd.edu/~dstefan/cse291-winter18/) (Deian Stefan, UCSD)
- CSE 291: [Language-Based Security](https://cseweb.ucsd.edu/~dstefan/cse291-winter18/) (Deian Stefan, UC San Diego)
- CSE 711: [Topics in Differential Privacy](https://www.acsu.buffalo.edu/~gaboardi/teaching/CSE711-spring16.html) (Marco Gaboardi, University at Buffalo)
- CS 800: [The Algorithmic Foundations of Data Privacy](https://www.cis.upenn.edu/~aaroth/courses/privacyF11.html) (Aaron Roth, UPenn)
- CS 229r: [Mathematical Approaches to Data Privacy](http://people.seas.harvard.edu/~salil/diffprivcourse/spring13/) (Salil Vadhan, Harvard)
- CS 294: [Fairness in Machine Learning](https://fairmlclass.github.io/) (Moritz Hardt, UC Berkeley)
- CS 598: [Special Topics in Adversarial Machine Learning](http://www.crystal-boli.com/teaching.html) (Bo Li, UIUC)

View File

@@ -1,40 +1,40 @@
# Calendar
# Calendar (tentative)
Date | Topic | Notes
:----:|-------|:---------:
| <center> <h4> **Differential Privacy** </h4> </center> |
9/4 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Reading:** Keshav. [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf). | HW1 Out
9/6 | Basic private mechanisms <br> **Reading:** AFDP 3.2-4 |
9/9 | Composition and closure properties <br> **Reading:** AFDP 3.5 | Signups
9/11 | What does differential privacy actually mean? <br> **Reading:** McSherry. [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) |
9/13 | Paper presentations: Differential privacy | HW1 Due
| <center> <h4> **Adversarial Machine Learning** </h4> </center> |
9/16 | Overview and basic concepts | HW2 Out
9/18 | Paper presentations: Adversarial attacks |
9/20 | Paper presentations: ??? |
9/23 | Adversarial training |
9/25 | Paper presentations: Certified defenses |
9/27 | Paper presentations: ??? | HW2 Due
| <center> <h4> **Applied Cryptography** </h4> </center> |
9/30 | Overview and basic constructions | HW3 Out
10/2 | Paper presentations: Secure Multiparty Computation |
10/4 | Paper presentations: ??? |
10/7 | Homomorphic Encryption |
10/9 | Paper presentations: Oblivious computing and side channels |
10/11 | Paper presentations: ??? | HW3 Due <br> MS1 Due
| <center> <h4> **Advanced Topic: Algorithmic Fairness** </h4> </center> |
10/14 | Overview and basic notions |
10/16 | Paper presentations: Individual and group fairness |
10/18 | Paper presentations: ??? |
10/21 | Challenges in defining fairness |
10/23 | Paper presentations: Repairing fairness |
10/25 | Paper presentations: ??? |
| <center> <h4> **Advanced Topic: PL and Verification** </h4> </center> |
10/28 | Overview and basic notions |
10/30 | Paper presentations: Probabilistic programming languages |
11/1 | Paper presentations: ??? |
11/4 | Programming languages for differential privacy |
11/6 | Paper presentations: Verifying probabilistic programs |
11/8 | Paper presentations: ??? | MS2 Due
| <center> <h4> **No Lectures: Work on Projects** </h4> </center> |
12/11 (TBD) | Project Presentations |
Date | Topic | Presenters | Notes
:----:|-------|:----------:|:-----:
| <center> <h4> **Differential Privacy** </h4> </center> | |
9/4 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Reading:** [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf) | JH | HW1 Out
9/6 | Basic private mechanisms <br> **Reading:** AFDP 3.2-4 | JH |
9/9 | Composition and closure properties <br> **Reading:** AFDP 3.5 | JH | Paper Signups
9/11 | What does differential privacy actually mean? <br> **Reading:** [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) | JH |
9/13 | Differentially private machine learning <br> **Reading:** [*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022) <br> **Reading:** [*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755) | | HW1 Due
| <center> <h4> **Adversarial Machine Learning** </h4> </center> | |
9/16 | Overview and basic concepts | JH | HW2 Out
9/18 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572) <br> **Reading:** [*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf) | |
9/20 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) | |
9/23 | Defenses and detection: challenges <br> **Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf) <br> **Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | JH |
9/25 | Certified defenses <br> **Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf) <br> **Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | |
9/27 | Adversarial training <br> **Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) | | HW2 Due
| <center> <h4> **Applied Cryptography** </h4> </center> | |
9/30 | Overview and basic constructions | JH | HW3 Out
10/2 | SMC for machine learning <br> **Reading:** [*Secure Computation for Machine Learning With SPDZ*](https://arxiv.org/pdf/1901.00329) <br> **Reading:** [*Helen: Maliciously Secure Coopetitive Learning for Linear Models*](https://arxiv.org/pdf/1907.07212) | |
10/4 | Secure data collection at scale <br> **Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) | |
10/7 | Verifiable computing <br> **Reading:** [*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268) | JH |
10/9 | Side channels and implementation issues <br> **Reading:** [*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf) | |
10/11 | Model watermarking <br> **Reading:** [*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf) <br> **Reading:** [*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633) | | HW3 Due <br> MS1 Due
| <center> <h4> **Advanced Topic: Algorithmic Fairness** </h4> </center> | |
10/14 | Overview and basic notions <br> **Reading:** Chapter 2 from [Barocas, Hardt, and Narayanan](https://fairmlbook.org/demographic.html) | JH |
10/16 | Individual and group fairness <br> **Reading:** [*Fairness through Awareness*](https://arxiv.org/pdf/1104.3913) <br> **Reading:** [*Equality of Opportunity in Supervised Learning*](https://arxiv.org/pdf/1610.02413) | |
10/18 | Inherent tradeoffs <br> **Reading:** [*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807) | |
10/21 | Defining fairness: challenges <br> **Reading:** [*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104) | JH |
10/23 | Fairness in unsupervised learning <br> **Reading:** [*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520) <br> **Reading:** [*Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints*](https://arxiv.org/pdf/1707.09457) | |
10/25 | Beyond observational measures <br> **Reading:** [*Avoiding Discrimination through Causal Reasoning*](https://arxiv.org/pdf/1706.02744) <br> **Reading:** [*Counterfactual Fairness*](https://arxiv.org/pdf/1703.06856) | |
| <center> <h4> **Advanced Topic: PL and Verification** </h4> </center> | |
10/28 | Overview and basic notions | JH |
10/30 | Probabilistic programming languages <br> **Reading:** [*Probabilistic Programming*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/fose-icse2014.pdf) | |
11/1 | Automata learning and interpretability <br> **Reading:** [*Model Learning*](https://m-cacm.acm.org/magazines/2017/2/212445-model-learning/fulltext) <br> **Reading:** [*Interpreting Finite Automata for Sequential Data*](https://arxiv.org/pdf/1611.07100) | |
11/4 | Programming languages for differential privacy <br> **Reading:** [*Programming Language Techniques for Differential Privacy*](https://dl.acm.org/citation.cfm?id=2893591&dl=ACM&coll=DL) | JH |
11/6 | Verifying neural networks <br> **Reading:** [*AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation*](https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf) <br> **Reading:** [*DL2: Training and Querying Neural Networks with Logic*](http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf) | |
11/8 | Verifying probabilistic programs <br> **Reading:** [*Advances and Challenges of Probabilistic Model Checking*](https://www.prismmodelchecker.org/papers/allerton10.pdf) <br> **Reading:** [*A Program Logic for Union Bounds*](https://arxiv.org/pdf/1602.05681) | | MS2 Due
| <center> <h4> **No Lectures: Work on Projects** </h4> </center> | |
12/11 (TBD) | Project Presentations | |
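As a companion to the 9/6 and 9/9 sessions above, the basic sequential composition theorem (answering queries with budgets e1, ..., ek is (e1 + ... + ek)-differentially private) can be turned into a tiny budget tracker. This is an illustrative sketch only, assuming `numpy` and counting queries of sensitivity 1:

```python
import numpy as np

class PrivacyBudget:
    """Track cumulative epsilon under basic sequential composition."""

    def __init__(self, total_epsilon: float):
        self.total, self.spent = total_epsilon, 0.0

    def noisy_count(self, true_count: float, epsilon: float) -> float:
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
        # Counting queries have sensitivity 1, so the Laplace scale is 1/epsilon.
        return true_count + np.random.laplace(scale=1.0 / epsilon)

budget = PrivacyBudget(total_epsilon=1.0)
answers = [budget.noisy_count(100.0, epsilon=0.25) for _ in range(4)]  # spends it all
```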