Course planning.
This commit is contained in: parent 6b5991482f, commit f19414f909

@@ -1,12 +1,3 @@
# Final Projects

- Yue Gao and Fayi Zhang. *Theory and Optimization of Homomorphic Encryption*.
- Yinglun Zhu. *Answering Evolving Sets of Queries*.
- Yan Nan and Shimaa Ahmed. *Private Voice Transcription*.
- Samuel Drews. *Verifying Decision Tree Stability*.
- Madeleine Berner and Yaman Yu. *Evaluate Adversarial Machine Learning Attacks
on Chinese Character Recognition*.
- Kyrie Zhou and Meghana Moorthy Bhat. *Detecting Fake News with NLP*.
- Hiba Nassereddine and Junxiong Huang. *Adversarial Machine Learning and Autonomous Vehicles*.
- Zichuan Tian and Arjun Kashyap. *pyDiff: Differential Privacy as a Library*.
- Zhiyi Chen and Yiqin Pan. *Detect Compromised Items from Data with Adversarial Attacks*.
TBA

@@ -1,8 +1,9 @@
# Project Details

The goal of the course project is to dive more deeply into a particular topic.
The project can be completed in **groups of two or three**. A good project could
lead to some kind of publication. This project could take different forms:
The project can be completed in **groups of three** (or in rare situations,
groups of two). A good project could lead to some kind of publication. This
project could take different forms:

- **Conceptual**: Develop a new technique, extend an existing method, or explore
a new application

@@ -36,8 +37,9 @@ should be clear what remains to be done.
be done, along with reach goals to try if things go well.

Besides the milestones, the main deliverable of the project will be a written
final report, around **15-20 pages** in length. Reports should be written in a
research paper style, covering the following areas in some reasonable order:
final report, around **15-20 pages** in length (in some reasonable format).
Reports should be written in a research paper style, covering the following
areas in some reasonable order:

- **Introduce** the problem and the motivation.
- **Review** background and preliminary material.

@@ -46,7 +48,3 @@ research paper style, covering the following areas in some reasonable order:
- **Survey** related work.

At the end of the course, each group will give a brief project presentation.

## Deadlines

See [here](../schedule/deadlines.md).

@@ -1,12 +1,12 @@
# Welcome to CS 763!

This is a graduate-level course covering advanced topics in security and privacy
in data science. We will focus on three core areas at the current research
frontier: **differential privacy**, **adversarial machine learning**, and
**applied cryptography** in machine learning. We will also cover selected
advanced topics; this year, **algorithmic fairness** and **formal verification**
for data science. This is primarily a project-based course, though there will
also be paper presentations and small homework assignments.
in data science. The field is eclectic, and so is this course. We will start
with three core areas: **differential privacy**, **adversarial machine
learning**, and **applied cryptography** in machine learning. Then, we will
cover two advanced topic areas; this year, **algorithmic fairness** and **formal
verification** for data science. This is primarily a project-based course,
though there will also be paper presentations and small homework assignments.

## Logistics
- **Course**: CS 763, Fall 2019

@@ -49,10 +49,10 @@ time-consuming and they will not be graded in detail.

### Homeworks

After each of the first three core modules, we will assign a small homework
assignment. These assignments are not weighed heavily---though they will be
graded---but they are mostly for you to check that you have grasped the
material.
There will be three small homework assignments, one for each of the core
modules. You will play with software implementations of the methods we cover in
class. These assignments are not weighted heavily, though they will be lightly
graded; the goal is to give you a chance to write some code.

### Course Project

@@ -25,6 +25,29 @@
- Matthew Joseph, Aaron Roth, Jonathan Ullman, and Bo Waggoner.
[*Local Differential Privacy for Evolving Data*](https://arxiv.org/abs/1802.07128).

### Adversarial Machine Learning
- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus.
[*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf).
ICLR 2014.
- Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy.
[*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572).
ICLR 2015.
- Nicholas Carlini and David Wagner.
[*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf).
S&P 2017.
- Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song.
[*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf).
CVPR 2018.
- Nicholas Carlini and David Wagner.
[*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf).
AISec 2017.
- Jacob Steinhardt, Pang Wei Koh, and Percy Liang.
[*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf).
NIPS 2017.
- Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.
[*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf).
ICLR 2018.

### Applied Cryptography
- Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish.
[*Verifying Computations with State*](https://eprint.iacr.org/2013/356.pdf).

@@ -51,7 +74,33 @@
[*Verifiable Differential Privacy*](https://www.cis.upenn.edu/~ahae/papers/verdp-eurosys2015.pdf).
EUROSYS 2015.

### Language-Based Security
### Algorithmic Fairness
- Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Rich Zemel.
[*Fairness through Awareness*](https://arxiv.org/pdf/1104.3913).
ITCS 2012.
- Moritz Hardt, Eric Price, and Nathan Srebro.
[*Equality of Opportunity in Supervised Learning*](https://arxiv.org/pdf/1610.02413).
NIPS 2016.
- Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai.
[*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520).
NIPS 2016.
- Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan.
[*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807).
ITCS 2017.
- Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum.
[*Multicalibration: Calibration for the (Computationally-Identifiable) Masses*](https://arxiv.org/pdf/1711.08513.pdf).
ICML 2018.
- Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu.
[*Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness*](https://arxiv.org/pdf/1711.05144).
ICML 2018.
- Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach.
[*A Reductions Approach to Fair Classification*](https://arxiv.org/pdf/1803.02453).
ICML 2018.
- Ben Hutchinson and Margaret Mitchell.
[*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104).
FAT\* 2019.

### Programming Languages and Verification
- Martín Abadi and Andrew D. Gordon.
[*A Calculus for Cryptographic Protocols: The Spi Calculus*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/ic99spi.pdf).
Information and Computation, 1999.

@@ -83,29 +132,6 @@
[*Verification of a Practical Hardware Security Architecture Through Static Information Flow Analysis*](http://www.cse.psu.edu/~dbz5017/pub/asplos17.pdf).
ASPLOS 2017.

### Adversarial Machine Learning
- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus.
[*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf).
ICLR 2014.
- Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy.
[*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/abs/1412.6572).
ICLR 2015.
- Nicholas Carlini and David Wagner.
[*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf).
S&P 2017.
- Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song.
[*Robust Physical-World Attacks on Deep Learning Models*](https://arxiv.org/pdf/1707.08945.pdf).
CVPR 2018.
- Nicholas Carlini and David Wagner.
[*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf).
AISec 2017.
- Jacob Steinhardt, Pang Wei Koh, and Percy Liang.
[*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf).
NIPS 2017.
- Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.
[*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf).
ICLR 2018.

# Supplemental Material
- Cynthia Dwork and Aaron Roth.
[*The Algorithmic Foundations of Differential Privacy*](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf).

@@ -1,15 +1,22 @@
## Differential Privacy
## Core software

- [TensorFlow](https://www.tensorflow.org/overview): Framework for writing,
training, and testing neural networks. You should at least work through the
[first tutorial](https://www.tensorflow.org/tutorials/keras/basic_classification);
a minimal sketch of that kind of code appears after this list.
- [TensorFlow Privacy](https://github.com/tensorflow/privacy): Extensions to TF
for differentially-private training.
- [CleverHans](https://github.com/tensorflow/cleverhans): Extensions to TF for
adversarial attacks and defenses on ML models.
- [MPyC](https://github.com/lschoe/mpyc): Python libraries for Secure Multiparty
Computation.
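
As a rough illustration of the kind of model the first tutorial builds, here is a
minimal sketch of training a small `tf.keras` classifier on the bundled
Fashion-MNIST dataset. The dataset, layer sizes, and training settings are
assumptions for illustration, not part of the course materials.

```python
# Minimal sketch (assumptions: TensorFlow with tf.keras and the bundled
# Fashion-MNIST dataset, as in the linked tutorial; details may differ).
import tensorflow as tf

# Load Fashion-MNIST and scale pixel values to [0, 1].
(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.fashion_mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# A small fully-connected classifier over 28x28 grayscale images.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train briefly and report held-out accuracy.
model.fit(train_images, train_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('test accuracy:', test_acc)
```

TensorFlow Privacy and CleverHans plug into this kind of TF workflow, adding
differentially-private training and adversarial attacks, respectively.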

## Other tools

### Differential Privacy
- [DFuzz](https://github.com/ejgallego/dfuzz)
- [HOARe2](https://github.com/ejgallego/HOARe2)

## Cryptography
### Cryptography
- [HELib](https://github.com/shaih/HElib)
- [Obliv-C](https://oblivc.org/)
- [ObliVM](http://oblivm.com/download.html)

## Language-Based Security
- [Jif](https://www.cs.cornell.edu/jif/)
- [FlowCaml](https://opam.ocaml.org/packages/flowcaml/flowcaml.1.07/)

## Adversarial Machine Learning
- [CleverHans](https://github.com/tensorflow/cleverhans)

@@ -1,11 +1,12 @@
The first key date is **September 16**. By this date, you should:
The first key date is **September 9**. By this date, you should:

- **Check in** with me briefly.
- **Sign up** to present a paper.
- **Choose** a project topic and form groups. This is not a firm commitment, but
you should have an initial direction.
- **Form project groups** of three.
- **Brainstorm** project topics. Try to come up with **1-2 sentences**
describing your initial direction. This is not a firm commitment---you can
change your topic as you learn more.

## Project Deadlines
- Milestone 1: **October 7**
- Milestone 1: **October 11**
- Milestone 2: **November 8**
- Final writeup and presentation: **December 11** (TBD)

@@ -3,38 +3,38 @@
Date | Topic | Notes
:----:|-------|:---------:
| <center> <h4> **Differential Privacy** </h4> </center> |
9/4 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Paper:** Keshav. [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf). |
9/6 | |
9/9 | |
9/11 | |
9/13 | |
9/4 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Reading:** Keshav. [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf). | HW1 Out
9/6 | Basic private mechanisms <br> **Reading:** AFDP 3.2-4 |
9/9 | Composition and closure properties <br> **Reading:** AFDP 3.5 | Signups
9/11 | What does differential privacy actually mean? <br> **Reading:** McSherry. [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) |
9/13 | Paper presentations | HW1 Due
| <center> <h4> **Adversarial Machine Learning** </h4> </center> |
9/16 | |
9/18 | |
9/20 | |
9/23 | |
9/25 | |
9/27 | |
9/16 | Overview and basic attacks | HW2 Out
9/18 | More attacks |
9/20 | Paper presentations |
9/23 | Defense: Adversarial training |
9/25 | Defense: Certified defenses |
9/27 | Paper presentations | HW2 Due
| <center> <h4> **Applied Cryptography** </h4> </center> |
9/30 | |
10/2 | |
10/4 | |
10/7 | |
10/9 | |
10/11 | |
9/30 | Overview and basic constructions | HW3 Out
10/2 | Secure Multiparty Computation |
10/4 | Paper presentations |
10/7 | Homomorphic Encryption |
10/9 | Oblivious computing and side channels |
10/11 | Paper presentations | HW3 Due <br> MS1 Due
| <center> <h4> **Advanced Topic: Algorithmic Fairness** </h4> </center> |
10/14 | |
10/16 | |
10/18 | |
10/21 | |
10/23 | |
10/25 | |
10/14 | Overview and basic notions |
10/16 | Individual and group fairness |
10/18 | Paper presentations |
10/21 | Repairing fairness |
10/23 | Challenges in defining fairness |
10/25 | Paper presentations |
| <center> <h4> **Advanced Topic: PL and Verification** </h4> </center> |
10/28 | |
10/30 | |
11/1 | |
11/4 | |
11/6 | |
11/8 | |
10/28 | Overview and basic notions |
10/30 | Programming languages for differential privacy |
11/1 | Paper presentations |
11/4 | Probabilistic programming languages |
11/6 | Verifying probabilistic programs |
11/8 | Paper presentations | MS2 Due
| <center> <h4> **No Lectures: Work on Projects** </h4> </center> |
12/11 (TBD) | Project Presentations |

@@ -29,5 +29,5 @@ nav:
  - Related Courses: 'resources/related.md'
  - Assignments:
    - Presentations: 'assignments/presentations.md'
    - Project: 'assignments/project.md'
    - Projects: 'assignments/project.md'
    - Gallery: 'assignments/gallery.md'