# Assorted Papers
### Differential Privacy
- Frank McSherry and Kunal Talwar.
[*Mechanism Design via Differential Privacy*](http://kunaltalwar.org/papers/expmech.pdf).
FOCS 2007.
- Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy Rothblum.
[*Differential Privacy under Continual Observation*](http://www.wisdom.weizmann.ac.il/~naor/PAPERS/continual_observation.pdf).
STOC 2010.
- T.-H. Hubert Chan, Elaine Shi, and Dawn Song.
[*Private and Continual Release of Statistics*](https://eprint.iacr.org/2010/076.pdf).
ICALP 2010.
- Ilya Mironov.
[*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf).
CCS 2012.
- Moritz Hardt, Katrina Ligett, and Frank McSherry.
[*A Simple and Practical Algorithm for Differentially Private Data Release*](https://papers.nips.cc/paper/4548-a-simple-and-practical-algorithm-for-differentially-private-data-release.pdf).
NIPS 2012.
- Daniel Kifer and Ashwin Machanavajjhala.
[*A Rigorous and Customizable Framework for Privacy*](http://www.cse.psu.edu/~duk17/papers/pufferfish_preprint.pdf).
PODS 2012.
- Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova.
[*RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response*](https://arxiv.org/pdf/1407.6981.pdf).
CCS 2014.
- Cynthia Dwork, Moni Naor, Omer Reingold, and Guy N. Rothblum.
[*Pure Differential Privacy for Rectangle Queries via Private Partitions*](https://guyrothblum.files.wordpress.com/2017/06/dnrr15.pdf).
ASIACRYPT 2015.
- Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang.
[*Deep Learning with Differential Privacy*](https://arxiv.org/pdf/1607.00133).
CCS 2016.
- Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, and Li Zhang.
[*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022).
CSF 2017.
- Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar.
[*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755).
ICLR 2017.
- Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Úlfar Erlingsson.
[*Scalable Private Learning with PATE*](https://arxiv.org/pdf/1802.08908).
ICLR 2018.
- Matthew Joseph, Aaron Roth, Jonathan Ullman, and Bo Waggoner.
[*Local Differential Privacy for Evolving Data*](https://arxiv.org/abs/1802.07128).
NeurIPS 2018.
- Albert Cheu, Adam Smith, Jonathan Ullman, David Zeber, and Maxim Zhilyaev.
[*Distributed Differential Privacy via Shuffling*](https://arxiv.org/pdf/1808.01394).
EUROCRYPT 2019.
- Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta.
[*Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity*](https://arxiv.org/pdf/1811.12469).
SODA 2019.
- Jingcheng Liu and Kunal Talwar.
[*Private Selection from Private Candidates*](https://arxiv.org/pdf/1811.07971).
STOC 2019.
### Adversarial ML
- Battista Biggio, Blaine Nelson, and Pavel Laskov.
[*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389).
ICML 2012.
- Battista Biggio, Ignazio Pillai, Samuel Rota Bulò, Davide Ariu, Marcello Pelillo, and Fabio Roli.
[*Is Data Clustering in Adversarial Settings Secure?*](https://arxiv.org/abs/1811.09982).
AISec 2013.
- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus.
[*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf).
ICLR 2014.
- Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy.
[*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572).
ICLR 2015.
- Matt Fredrikson, Somesh Jha, and Thomas Ristenpart.
[*Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures*](https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf).
CCS 2015.
- Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow.
[*Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples*](https://arxiv.org/abs/1605.07277).
arXiv 2016.
- Nicholas Carlini and David Wagner.
[*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf).
S&P 2017.
- Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov.
[*Membership Inference Attacks against Machine Learning Models*](https://arxiv.org/pdf/1610.05820).
S&P 2017.
- Nicholas Carlini and David Wagner.
[*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf).
AISec 2017.
- Jacob Steinhardt, Pang Wei Koh, and Percy Liang.
[*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf).
NIPS 2017.
- Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song.
[*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf).
CVPR 2018.
- Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.
[*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf).
ICLR 2018.
- Aditi Raghunathan, Jacob Steinhardt, and Percy Liang.
[*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344).
ICLR 2018.
- Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel.
[*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204).
ICLR 2018.
- Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein.
[*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792).
NeurIPS 2018.
- Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song.
[*The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks*](https://arxiv.org/pdf/1802.08232).
USENIX Security 2019.
- Vitaly Feldman.
[*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271).
STOC 2020.
### Applied Cryptography
- Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish.
[*Verifying Computations with State*](https://eprint.iacr.org/2013/356.pdf).
SOSP 2013.
- Bryan Parno, Jon Howell, Craig Gentry, and Mariana Raykova.
[*Pinocchio: Nearly Practical Verifiable Computation*](https://eprint.iacr.org/2013/279.pdf).
S&P 2013.
- Aseem Rastogi, Matthew A. Hammer, and Michael Hicks.
[*Wysteria: A Programming Language for Generic, Mixed-Mode Multiparty Computations*](http://www.cs.umd.edu/~aseem/wysteria-tr.pdf).
S&P 2014.
- Shai Halevi and Victor Shoup.
[*Algorithms in HElib*](https://www.shoup.net/papers/helib.pdf).
CRYPTO 2014.
- Shai Halevi and Victor Shoup.
[*Bootstrapping for HElib*](https://www.shoup.net/papers/boot.pdf).
EUROCRYPT 2015.
- Léo Ducas and Daniele Micciancio.
[*FHEW: Bootstrapping Homomorphic Encryption in Less than a Second*](https://eprint.iacr.org/2014/816.pdf).
EUROCRYPT 2015.
- Peter Kairouz, Sewoong Oh, and Pramod Viswanath.
[*Secure Multi-party Differential Privacy*](https://papers.nips.cc/paper/6004-secure-multi-party-differential-privacy.pdf).
NIPS 2015.
- Arjun Narayan, Ariel Feldman, Antonis Papadimitriou, and Andreas Haeberlen.
[*Verifiable Differential Privacy*](https://www.cis.upenn.edu/~ahae/papers/verdp-eurosys2015.pdf).
EUROSYS 2015.
- Henry Corrigan-Gibbs and Dan Boneh.
[*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf).
NSDI 2017.
- Zahra Ghodsi, Tianyu Gu, and Siddharth Garg.
[*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268).
NIPS 2017.
- Valerie Chen, Valerio Pastro, and Mariana Raykova.
[*Secure Computation for Machine Learning With SPDZ*](https://arxiv.org/pdf/1901.00329).
NeurIPS 2018.
- Jialong Zhang, Zhongshu Gu, Jiyong Jang, Hui Wu, Marc Ph. Stoecklin, Heqing Huang, and Ian Molloy.
[*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf).
AsiaCCS 2018.
- Yossi Adi, Carsten Baum, Moustapha Cisse, Benny Pinkas, and Joseph Keshet.
[*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633).
USENIX Security 2018.
- Wenting Zheng, Raluca Ada Popa, Joseph E. Gonzalez, and Ion Stoica.
[*Helen: Maliciously Secure Coopetitive Learning for Linear Models*](https://arxiv.org/pdf/1907.07212).
S&P 2019.
- Bita Darvish Rouhani, Huili Chen, and Farinaz Koushanfar.
[*DeepSigns: A Generic Watermarking Framework for IP Protection of Deep Learning Models*](https://arxiv.org/pdf/1804.00750).
ASPLOS 2019.
- Roshan Dathathri, Olli Saarikivi, Hao Chen, Kim Laine, Kristin Lauter, Saeed Maleki, Madanlal Musuvathi, and Todd Mytkowicz.
[*CHET: An Optimizing Compiler for Fully-Homomorphic Neural-Network Inferencing*](https://dl.acm.org/ft_gateway.cfm?id=3314628&ftid=2065506&dwn=1).
PLDI 2019.
### Algorithmic Fairness
- Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Rich Zemel.
[*Fairness through Awareness*](https://arxiv.org/pdf/1104.3913).
ITCS 2012.
- Moritz Hardt, Eric Price, and Nathan Srebro.
[*Equality of Opportunity in Supervised Learning*](https://arxiv.org/pdf/1610.02413).
NIPS 2016.
- Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai.
[*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520).
NIPS 2016.
- Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang.
[*Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints*](https://arxiv.org/pdf/1707.09457).
EMNLP 2017.
- Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan.
[*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807).
ITCS 2017.
- Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf.
[*Avoiding Discrimination through Causal Reasoning*](https://arxiv.org/pdf/1706.02744).
NIPS 2017.
- Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva.
[*Counterfactual Fairness*](https://arxiv.org/pdf/1703.06856).
NIPS 2017.
- Razieh Nabi and Ilya Shpitser.
[*Fair Inference on Outcomes*](https://arxiv.org/pdf/1705.10378).
AAAI 2018.
- Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum.
[*Multicalibration: Calibration for the (Computationally-Identifiable) Masses*](https://arxiv.org/pdf/1711.08513.pdf).
ICML 2018.
- Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu.
[*Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness*](https://arxiv.org/pdf/1711.05144).
ICML 2018.
- Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach.
[*A Reductions Approach to Fair Classification*](https://arxiv.org/pdf/1803.02453).
ICML 2019.
- Ben Hutchinson and Margaret Mitchell.
[*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104).
FAT\* 2019.
### PL and Verification
- Martín Abadi and Andrew D. Gordon.
[*A Calculus for Cryptographic Protocols: The Spi Calculus*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/ic99spi.pdf).
Information and Computation, 1999.
- Noah Goodman, Vikash Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua B. Tenenbaum.
[*Church: a language for generative models*](https://arxiv.org/pdf/1206.3255).
UAI 2008.
- Frank McSherry.
[*Privacy Integrated Queries*](http://citeseerx.ist.psu.edu/viewdoc/download?rep=rep1&type=pdf&doi=10.1.1.211.4503).
SIGMOD 2009.
- Marta Kwiatkowska, Gethin Norman, and David Parker.
[*Advances and Challenges of Probabilistic Model Checking*](https://www.prismmodelchecker.org/papers/allerton10.pdf).
Allerton 2010.
- Jason Reed and Benjamin C. Pierce.
[*Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy*](https://www.cis.upenn.edu/~bcpierce/papers/dp.pdf).
ICFP 2010.
- Daniel B. Giffin, Amit Levy, Deian Stefan, David Terei, David Mazières, John C. Mitchell, and Alejandro Russo.
[*Hails: Protecting Data Privacy in Untrusted Web Applications*](https://www.usenix.org/system/files/conference/osdi12/osdi12-final-35.pdf).
OSDI 2012.
- Danfeng Zhang, Aslan Askarov, and Andrew C. Myers.
[*Language-Based Control and Mitigation of Timing Channels*](https://www.cs.cornell.edu/andru/papers/pltiming-pldi12.pdf).
PLDI 2012.
- Andrew Miller, Michael Hicks, Jonathan Katz, and Elaine Shi.
[*Authenticated Data Structures, Generically*](https://www.cs.umd.edu/~mwh/papers/gpads.pdf).
POPL 2014.
- Andrew D. Gordon, Thomas A. Henzinger, Aditya V. Nori, and Sriram K. Rajamani.
[*Probabilistic Programming*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/fose-icse2014.pdf).
ICSE 2014.
- Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, and Pierre-Yves Strub.
[*Higher-Order Approximate Relational Refinement Types for Mechanism Design and Differential Privacy*](https://arxiv.org/pdf/1407.6845.pdf).
POPL 2015.
- Samee Zahur and David Evans.
[*Obliv-C: A Language for Extensible Data-Oblivious Computation*](https://eprint.iacr.org/2015/1153.pdf).
IACR ePrint 2015.
- Chang Liu, Xiao Shaun Wang, Kartik Nayak, Yan Huang, and Elaine Shi.
[*ObliVM: A Programming Framework for Secure Computation*](http://www.cs.umd.edu/~elaine/docs/oblivm.pdf).
S&P 2015.
- Gilles Barthe, Marco Gaboardi, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub.
[*A Program Logic for Union Bounds*](https://arxiv.org/pdf/1602.05681).
ICALP 2016.
- Christian Albert Hammerschmidt, Sicco Verwer, Qin Lin, and Radu State.
[*Interpreting Finite Automata for Sequential Data*](https://arxiv.org/pdf/1611.07100).
NIPS 2016.
- Joost-Pieter Katoen.
[*The Probabilistic Model Checking Landscape*](https://moves.rwth-aachen.de/wp-content/uploads/lics2016_tutorial_katoen.pdf).
LICS 2016.
- Andrew Ferraiuolo, Rui Xu, Danfeng Zhang, Andrew C. Myers, and G. Edward Suh.
[*Verification of a Practical Hardware Security Architecture Through Static Information Flow Analysis*](http://www.cse.psu.edu/~dbz5017/pub/asplos17.pdf).
ASPLOS 2017.
- Frits Vaandrager.
[*Model Learning*](https://m-cacm.acm.org/magazines/2017/2/212445-model-learning/fulltext).
CACM 2017.
- Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev.
[*AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation*](https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf).
S&P 2018.
- Matthew Mirman, Timon Gehr, and Martin Vechev.
[*Differentiable Abstract Interpretation for Provably Robust Neural Networks*](http://proceedings.mlr.press/v80/mirman18b/mirman18b.pdf).
ICML 2018.
- Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind.
[*Automatic differentiation in machine learning: a survey*](https://arxiv.org/pdf/1502.05767).
JMLR 2018.
- Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin T. Vechev.
[*An Abstract Domain for Certifying Neural Networks*](https://files.sri.inf.ethz.ch/website/papers/DeepPoly.pdf).
POPL 2019.
- Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, and Martin Vechev.
[*DL2: Training and Querying Neural Networks with Logic*](http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf).
ICML 2019.
- Abhinav Verma, Hoang M. Le, Yisong Yue, and Swarat Chaudhuri.
[*Imitation-Projected Programmatic Reinforcement Learning*](https://arxiv.org/pdf/1907.05431).
NeurIPS 2019.
- Kenneth L. McMillan.
[*Bayesian Interpolants as Explanations for Neural Inferences*](https://arxiv.org/abs/2004.04198).
arXiv 2020.
# Supplemental Material
- Cynthia Dwork and Aaron Roth.
[*The Algorithmic Foundations of Differential Privacy*](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf).
- Solon Barocas, Moritz Hardt, and Arvind Narayanan.
[*Fairness and Machine Learning: Limitations and Opportunities*](https://fairmlbook.org/index.html).
- Gilles Barthe, Marco Gaboardi, Justin Hsu, and Benjamin C. Pierce.
[*Programming Language Techniques for Differential Privacy*](https://siglog.hosting.acm.org/wp-content/uploads/2016/01/siglog_news_7.pdf).
- Michael Walfish and Andrew J. Blumberg.
[*Verifying Computations without Reexecuting Them*](http://delivery.acm.org/10.1145/2650000/2641562/p74-walfish.pdf?ip=24.59.48.254&id=2641562&acc=OA&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E757E42EE4C319386&__acm__=1533144327_267b96b7bd723efc52072f0f79f6720d).
- Véronique Cortier, Steve Kremer, and Bogdan Warinschi.
[*A Survey of Symbolic Methods in Computational Analysis of Cryptographic Systems*](https://hal.inria.fr/inria-00379776/document).
- Dan Boneh and Victor Shoup.
[*A Graduate Course in Applied Cryptography*](http://toc.cryptobook.us/).
- David Hand.
[*Statistics and the Theory of Measurement*](http://www.lps.uci.edu/~johnsonk/CLASSES/MeasurementTheory/Hand1996.StatisticsAndTheTheoryOfMeasurement.pdf).
- Judea Pearl.
[*Causal inference in statistics: An overview*](http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf).
- Judea Pearl.
[*Understanding Simpson's Paradox*](https://ftp.cs.ucla.edu/pub/stat_ser/r414.pdf).
- Yehuda Lindell and Benny Pinkas.
[*Secure Multiparty Computation for Privacy-Preserving Data Mining*](https://eprint.iacr.org/2008/197.pdf).