Adjusting paper presentation format.

Plan: fix the topic for each lecture, with two suggested papers each.
Justin Hsu 2019-08-22 18:39:38 -05:00
parent 12236ac934
commit fc344fadbb
3 changed files with 65 additions and 45 deletions


@@ -25,49 +25,47 @@ These three components are detailed below.

### Paper presentations

**Paper discussions** are one of the main components of this course. In groups
of two (or very rarely three), you will present 2-3 papers on a related topic
and lead the discussion; we will have presentations most Wednesdays and Fridays.
Your presentation should last about **60 minutes**, leaving the remainder of
the time for a wrap-up discussion. Please sign up for a slot and a paper by
**Monday, September 9**; while we will try to accommodate everyone's interests,
we may need to adjust the selections for better balance and coverage.

Before every presentation, all students are expected to read the papers closely
and understand their significance, including (a) the main problems, (b) the
primary contributions, and (c) how the authors solve these problems. Of course,
you are also expected to attend and actively participate in the discussions.

We will be reading about topics from the recent research literature. Most
research papers focus on a very narrow topic and are written for a very specific
technical audience. It also doesn't help that researchers are generally not the
clearest writers, though there are certainly exceptions. These
[notes](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf) by
Srinivasan Keshav may help you get more out of reading papers.

To help you prepare for the class discussions, I will also send out a few
questions at least 24 hours before every paper presentation. **Before** each
lecture, you should send me brief answers---a short email is fine, no more than
a few sentences per question. These questions will help you check that you have
understood the papers---they are not meant to be very difficult or
time-consuming, and they will not be graded in detail.

### Homeworks

There will be three small homework assignments, one for each of the core
modules, where you will play with software implementations of the methods we
cover in class. These assignments will be lightly graded; the goal is to give
you a chance to write some code and run some experiments.
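
For a rough sense of what this looks like, here is a minimal sketch of the basic
Laplace mechanism from the differential privacy module (purely illustrative, not
taken from any assignment; it assumes NumPy, and the function name is just a
placeholder):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a noisy answer satisfying epsilon-differential privacy.

    `true_value` is the exact query answer, `sensitivity` bounds how much one
    person's data can change it, and the noise scale sensitivity/epsilon is the
    standard Laplace-mechanism calibration covered in the AFDP reading.
    """
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: release a counting query (sensitivity 1) with a privacy budget of 0.5.
noisy_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)
```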

### Course Project

The main component of the course is the **course project**. You will work
individually or in pairs on a topic of your choice, producing a conference-style
write-up and presenting the project at the end of the semester. The best
projects may eventually lead to a research paper or survey. Details can be found
[here](assignments/project.md).

## Learning Outcomes

By the end of this course, you should be able to...

- Summarize the basic concepts in differential privacy, applied cryptography,
  and adversarial machine learning.
- Use techniques from differential privacy to design privacy-preserving data
  analyses.
- Grasp the high-level concepts from research literature on the main course


@@ -24,6 +24,13 @@

  ASIACRYPT 2015.
- Matthew Joseph, Aaron Roth, Jonathan Ullman, and Bo Waggoner.
  [*Local Differential Privacy for Evolving Data*](https://arxiv.org/abs/1802.07128).
  NIPS 2018.
- Albert Cheu, Adam Smith, Jonathan Ullman, David Zeber, and Maxim Zhilyaev.
  [*Distributed Differential Privacy via Shuffling*](https://arxiv.org/pdf/1808.01394).
  EUROCRYPT 2019.
- Jingcheng Liu and Kunal Talwar.
  [*Private Selection from Private Candidates*](https://arxiv.org/pdf/1811.07971).
  STOC 2019.

### Adversarial Machine Learning

- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus.
@@ -47,6 +54,12 @@

- Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.
  [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf).
  ICLR 2018.
- Vitaly Feldman.
  [*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271).
  arXiv 2019.
- Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song.
  [*The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks*](https://arxiv.org/pdf/1802.08232).
  USENIX Security 2019.

### Applied Cryptography

- Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish.
@@ -73,6 +86,15 @@

- Arjun Narayan, Ariel Feldman, Antonis Papadimitriou, and Andreas Haeberlen.
  [*Verifiable Differential Privacy*](https://www.cis.upenn.edu/~ahae/papers/verdp-eurosys2015.pdf).
  EUROSYS 2015.
- Henry Corrigan-Gibbs and Dan Boneh.
  [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf).
  NSDI 2017.
- Valerie Chen, Valerio Pastro, and Mariana Raykova.
  [*Secure Computation for Machine Learning With SPDZ*](https://arxiv.org/pdf/1901.00329).
  NIPS 2018.
- Wenting Zheng, Raluca Ada Popa, Joseph E. Gonzalez, and Ion Stoica.
  [*Helen: Maliciously Secure Coopetitive Learning for Linear Models*](https://arxiv.org/pdf/1907.07212).
  S&P 2019.

### Algorithmic Fairness

- Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Rich Zemel.


@@ -7,34 +7,34 @@

9/6 | Basic private mechanisms <br> **Reading:** AFDP 3.2-4 |
9/9 | Composition and closure properties <br> **Reading:** AFDP 3.5 | Signups
9/11 | What does differential privacy actually mean? <br> **Reading:** McSherry. [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) |
9/13 | Paper presentations: Differential privacy | HW1 Due
| <center> <h4> **Adversarial Machine Learning** </h4> </center> |
9/16 | Overview and basic concepts | HW2 Out
9/18 | Paper presentations: Adversarial attacks |
9/20 | Paper presentations: ??? |
9/23 | Adversarial training |
9/25 | Paper presentations: Certified defenses |
9/27 | Paper presentations: ??? | HW2 Due
| <center> <h4> **Applied Cryptography** </h4> </center> |
9/30 | Overview and basic constructions | HW3 Out
10/2 | Paper presentations: Secure Multiparty Computation |
10/4 | Paper presentations: ??? |
10/7 | Homomorphic Encryption |
10/9 | Paper presentations: Oblivious computing and side channels |
10/11 | Paper presentations: ??? | HW3 Due <br> MS1 Due
| <center> <h4> **Advanced Topic: Algorithmic Fairness** </h4> </center> |
10/14 | Overview and basic notions |
10/16 | Paper presentations: Individual and group fairness |
10/18 | Paper presentations: ??? |
10/21 | Challenges in defining fairness |
10/23 | Paper presentations: Repairing fairness |
10/25 | Paper presentations: ??? |
| <center> <h4> **Advanced Topic: PL and Verification** </h4> </center> |
10/28 | Overview and basic notions |
10/30 | Paper presentations: Probabilistic programming languages |
11/1 | Paper presentations: ??? |
11/4 | Programming languages for differential privacy |
11/6 | Paper presentations: Verifying probabilistic programs |
11/8 | Paper presentations: ??? | MS2 Due
| <center> <h4> **No Lectures: Work on Projects** </h4> </center> |
12/11 (TBD) | Project Presentations |