Compare commits

...

30 Commits
f19 ... master

Author | SHA1 | Message | Date
:------|:-----|:--------|:-----
Justin Hsu | 4bfced411c | Update archive target. | 2023-03-29 23:08:57 -04:00
Justin Hsu | 5f882d1b37 | Change deploy and archive targets. | 2021-01-14 17:02:31 +00:00
Justin Hsu | 2c754e62df | Update mkdocs. | 2021-01-14 14:42:17 +00:00
Justin Hsu | 2b06229e39 | Add pandoc to install. | 2021-01-14 12:25:14 +00:00
Justin Hsu | 428afd9916 | Remove Matt from presentation schedule. | 2020-12-07 20:39:19 +00:00
Justin Hsu | 00eed13f4e | Presentation schedule. | 2020-12-01 12:20:48 +00:00
Justin Hsu | 90f7d01ea8 | Shift project presentations earlier. | 2020-11-10 18:34:15 -06:00
Justin Hsu | cce2268a74 | Presentation days. | 2020-11-10 13:41:27 -06:00
Justin Hsu | f12afeda37 | Merge organization page into main page. | 2020-10-27 11:27:33 -05:00
Justin Hsu | f89f3f167d | Adjust readings. | 2020-10-07 16:50:47 -05:00
Justin Hsu | b98be86a70 | Update readings. | 2020-09-22 18:10:26 -05:00
Justin Hsu | 3116fd0303 | Fill in schedule of presenters. | 2020-09-14 20:12:09 -05:00
Justin Hsu | 8f7b2bad9f | Fix link. | 2020-09-09 18:16:15 -05:00
Justin Hsu | 6eba1659cf | Edit links. | 2020-09-09 18:08:21 -05:00
Justin Hsu | 0678e7643e | Link slides. | 2020-09-02 15:53:24 -05:00
Justin Hsu | 845636d34e | Tweak grading. | 2020-09-02 12:21:16 -05:00
Justin Hsu | afbfc2dee4 | Adjust calendar. | 2020-09-02 12:15:12 -05:00
Justin Hsu | 29ea73bd67 | Formatting. | 2020-09-02 11:56:41 -05:00
Justin Hsu | ab3008f496 | Swap order. | 2020-09-01 18:47:56 -05:00
Justin Hsu | 42be556bb5 | Update slides. | 2020-09-01 18:41:36 -05:00
Justin Hsu | 2b0a174414 | More grading. | 2020-09-01 18:41:23 -05:00
Justin Hsu | cfedcb5d83 | Adjust grading. | 2020-08-24 18:34:11 -05:00
Justin Hsu | 70210d7658 | Adjust readings. | 2020-08-24 18:29:45 -05:00
Justin Hsu | 40ac4359ab | More on reviewing. | 2020-08-24 18:19:45 -05:00
Justin Hsu | 7747928e05 | Paper review option for remote students. | 2020-08-20 17:27:32 -05:00
Justin Hsu | 62a8cb85e1 | Update link to Boneh and Shoup. | 2020-08-17 17:57:09 -05:00
Justin Hsu | 645ff1e28e | Signup and deadlines. | 2020-08-14 17:03:05 -05:00
Justin Hsu | a433994eb2 | Update schedule for Fall 2020. | 2020-08-14 16:57:21 -05:00
Justin Hsu | ea37a2e57d | Update. | 2020-08-14 10:52:14 -05:00
Justin Hsu | 86d1a81cb3 | Start preparing F20 webpage. | 2020-08-14 10:51:54 -05:00
11 changed files with 299 additions and 176 deletions

View File

@@ -1,6 +1,8 @@
HOST=wisc
DEPLOY_HOST=wisc
DEPLOY=/u/j/u/justhsu/public/html-s/teaching/current/cs763
# ARCHIVE=/u/j/u/justhsu/public/html-s/teaching/f19/cs763
ARCHIVE_HOST=jackfruit
ARCHIVE=/var/www/html/teaching/f20/cs763
build:
make assets && mkdocs build
@@ -13,16 +15,18 @@ assets:
install:
pip install mkdocs mkdocs-material pymdown-extensions
(cabal new-update && cabal new-install pandoc)
deploy:
make build
find . -type d -exec chmod a+rx {} \;
find . -type f -exec chmod a+r {} \;
rsync -avzp --delete -e ssh ./site/ $(HOST):$(DEPLOY)
# ssh jackknife 'mkdir -p html/staging/cs763'
rsync -avzp --relative $(DEPLOY) -e ssh ./site/ $(DEPLOY_HOST)
archive:
make build
find . -type d -exec chmod a+rx {} \;
find . -type f -exec chmod a+r {} \;
ssh $(HOST) mkdir -p $(ARCHIVE)
rsync -avzp --delete -e ssh ./site/ $(HOST):$(ARCHIVE)
ssh $(ARCHIVE_HOST) mkdir -p $(ARCHIVE)
rsync -avzp --delete -e ssh ./site/ $(ARCHIVE_HOST):$(ARCHIVE)
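For orientation, a minimal sketch of how these targets would typically be driven from a local checkout (an editorial illustration only; it assumes SSH access to the hosts referenced by `DEPLOY_HOST` and `ARCHIVE_HOST` above):

```sh
# Install the site toolchain (mkdocs, the Material theme, extensions, and pandoc via cabal).
make install

# Build the site into ./site/ (runs `make assets && mkdocs build`).
make build

# Push the built site to the live location on the deploy host.
make deploy

# Snapshot the finished semester's site into the archive location.
make archive
```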

View File

@@ -1,3 +0,0 @@
# Final Projects
TBA

View File

@@ -0,0 +1,68 @@
# Paper reviews
!!! attention
* Paper reviews are for students in time zones where they **cannot** attend live
lectures.
* Students who are able to attend live lectures are required to complete a
paper presentation and presentation summary instead.
Starting from the first week of paper presentations (**September 14**), students
who cannot attend live lectures will complete **two paper reviews per week**,
**16** in all. We will be using HotCRP---standard conference management
software---to manage reviews. Reviews must be uploaded **before the paper is
presented in class**.
The HotCRP instance for this course is available here:
- <https://wisc-cs763-20.hotcrp.com/>
## What makes a good review?
A good review accomplishes several things:
- It **summarizes** the main contributions of the paper.
- It highlights **strengths and weaknesses** of the paper. Note that these
points do not need to be purely technical.
- It **evaluates** the paper, explaining why the reviewer thinks the paper is
strong or weak, interesting or not interesting.
- It gives authors **suggestions to improve** the paper.
## FAQ
- **Can I switch from doing paper presentation/summary to paper reviews or vice versa?**
No: if you are doing paper reviews, you must let me know during the first week of
class so that I can add you as a reviewer to HotCRP.
- **How long should reviews be?**
You should aim for around 400 words, total. We will not be counting words, but
if your review is three sentences long we will probably not be too happy.
- **Are late reviews accepted?**
No: reviews must be uploaded before the paper is presented in class.
- **Can I submit more than two reviews a week?**
No: you should submit exactly two reviews per week.
- **I was not sure how to evaluate the paper: what should I do?**
You should say so, and explain the strengths and weaknesses of the paper.
- **It takes me too much time to read through two papers. What should I do?**
Given the short reviewing schedule, you will not have time to read through every
single word in every single paper. Instead, you should skim over parts that are
not so crucial. More concretely, you should do the first **two passes** of the
three-pass system described
[here](http://ccr.sigcomm.org/online/files/p83-keshavA.pdf)---the third pass is
**not** required.
- **I found a review of the paper online. Can I look at it for inspiration?**
Definitely not: doing so is an academic honesty violation. In any case, there is
no absolutely right or wrong conclusion when reviewing a paper---the idea is to
give **your opinion** of the paper based on **your understanding**, and then
argue why your opinion is correct.

View File

@@ -1,37 +1,109 @@
# Welcome to CS 763!
!!! attention
* Due to COVID-19, CS 763 will be conducted **virtually**.
* All times are [Madison local time](https://www.timeanddate.com/worldclock/usa/madison).
This is a graduate-level course covering advanced topics in security and privacy
in data science. The field is eclectic, and so is this course. We will start
with three core areas: **differential privacy**, **adversarial machine
learning**, and **applied cryptography** in machine learning. Then, we will
cover two advanced topic areas; this year, **algorithmic fairness** and **formal
verification** for data science. This is primarily a project-based course,
though there will also be paper presentations and small homework assignments.
verification** for data science. This is a project-based course: in small
groups, students will be expected to complete a final project on a technical
topic related to the course.
Besides covering technical material, this course will emphasize research skills:
**reading** research papers, **presenting** technical material, and **writing**
summaries and reviews.
## Logistics
- **Course**: CS 763, Fall 2019
- **Location**: CS 1263
- **Course**: CS 763, Fall 2020
- **Time**: Monday, Wednesday, Friday, 2:30-3:45
- **Location**: BB Collaborate Ultra (BBCU)
For the first ten weeks, lectures will be held on Monday, Wednesday, and Friday.
In the remaining five weeks, you will work on your course projects. Though there
are no lectures scheduled in this period, I will be available to meet as needed.
## Mailing List
We will be using **Piazza** to discuss papers, ask questions, and find group
members:
Please use the mailing list if you want to contact the whole course:
- <https://piazza.com/class/ke3clkclul16hq>
- <mailto:compsci763-1-f19@lists.wisc.edu>
All registered students should be on this list. If you are not registered but
would like to follow along, please let me know and I will add you.
Otherwise, you can contact me directly. To ensure that your email goes to the
right place, please start the subject with **CS763**.
You can also contact me directly. To ensure that your email goes to the right
place, please start the subject with **CS763**.
## Course Staff
- **Instructor**: [Justin Hsu](https://justinh.su)
- **Email**: <mailto:justhsu@cs.wisc.edu>
- **Location**: CS 6379
- **Office hours**: By appointment
## Grading
Grades will be posted on Canvas.
- Presentation and summary
- Paper presentation: **15%**
- Presentation summary: **15%**
- **OR:** Paper reviews **(remote only)**
- 16 reviews: **30%**
- Course project
- Milestone 1: **10%**
- Milestone 2: **10%**
- Final project: **50%**
Everything except the final project will be graded on a simple scale: no
submission (0), below expectations (1), meets expectations (2). Assignments that
significantly exceed expectations can receive additional (bonus) points. The
final project will be graded on a **10-point** scale.
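As a rough editorial illustration only (the page does not spell out the conversion), if each component score is first normalized to a fraction of full credit, the weights above combine as

$$
\text{total} = 0.15\,p + 0.15\,s + 0.10\,m_1 + 0.10\,m_2 + 0.50\,f,
$$

where $p$, $s$, $m_1$, $m_2$, and $f$ denote the presentation, summary, milestone 1, milestone 2, and final-project scores; for remote students, the $0.15\,p + 0.15\,s$ term is replaced by $0.30\,r$ for the paper reviews.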
### Paper presentations
In groups of two you will lead one lecture, presenting a few related papers and
guiding the discussion; details [here](assignments/presentations.md).
### Presentation reports
In groups of two you will write up a detailed summary of another group's
presentation; details [here](assignments/summaries.md).
### Course Project
The main course component is the **course project**. You will work individually
or in pairs on a topic of your choice, producing a conference-style write-up and
presenting the project at the end of the semester. The best projects may
eventually lead to a research paper or survey. Details can be found
[here](assignments/project.md).
## Accommodations for Remote Students
To provide opportunities for live discussion, lectures will be held
synchronously. To accommodate students attending from other time zones, all
lectures will be recorded and uploaded to BBCU (this may take a few hours).
Students who are not able to attend synchronously will not be able to present a
paper and write a presentation summary. Instead, these students will complete
paper reviews asynchronously, through **HotCRP**:
- <https://wisc-cs763-20.hotcrp.com/>
!!! attention
If you are not able to regularly attend live lectures in your time zone, you
must let me know **during the first week of the course** so I can set up
your account.
## Academic Honesty
**Writing is a central part of this course.** All students are expected to
follow academic honesty standards. In brief: all the text that you submit must
be **in your own words**, and you are not allowed to copy anything---from a
paper, from the internet, from someone else---without full attribution.
If you are completing paper reviews, you should not search for reviews that may
be online---this is expressly **against the course policies**. You should
complete the review as if you were seeing the paper for the first time. Just
like conference reviewing, all paper reviews are to be done **by yourself**: you
should not talk to anyone about the paper until **after** you have submitted your review.

View File

@@ -1,88 +0,0 @@
Lectures will be loosely organized around three core modules: differential
privacy, adversarial machine learning, and applied cryptography. We will also
cover two advanced modules: algorithmic fairness, and PL and verification
techniques.
This is a graduate seminar, so not all lectures are set in stone and there is
considerable flexibility in the material. If you are interested in something not
covered in the syllabus, please let me know!
## Course Materials
For differential privacy, we will use the textbook *Algorithmic Foundations of
Data Privacy* (AFDP) by Cynthia Dwork and Aaron Roth, available
[here](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf).
## Grading and Evaluation
Grades will be assigned as follows:
- **Paper presentations: 20%**
- **Presentation reports: 20%**
- **Final project: 60%** (Milestones 1 and 2, and final writeup)
These three components are detailed below.
### Paper presentations
In groups of two you will lead one lecture, presenting a few related papers and
guiding the discussion; details [here](assignments/presentations.md).
### Presentation reports
In groups of two you will write up a detailed summary of another group's
presentation; details [here](assignments/summaries.md).
### Course Project
The main course component is the **course project**. You will work individually
or in pairs on a topic of your choice, producing a conference-style write-up and
presenting the project at the end of the semester. The best projects may
eventually lead to a research paper or survey. Details can be found
[here](assignments/project.md).
## Learning Outcomes
By the end of this course, you should be able to...
- Summarize the basic concepts in differential privacy, applied cryptography,
and adversarial machine learning.
- Use techniques from differential privacy to design privacy-preserving data
analyses.
- Grasp the high-level concepts from research literature on the main course
topics.
- Present and lead a discussion on recent research results.
- Carry out an in-depth exploration of one topic in the form of a self-directed
research project.
## Credit Information
This is a **3-credit** graduate seminar. For the first 10 weeks of the fall
semester, we will meet for three 75-minute class periods each week. You should
expect to work on course learning activities for about 3 hours out of classroom
for each hour of class.
## Academic Integrity
The final project may be done in groups of three (or in rare situations, two)
students. Collaborative projects with people outside the class may be allowed,
but check with me first. Everything else you turn in---from homework assignments
to discussion questions---should be **your own work**. Concretely: you may
discuss together, but **you must write up solutions entirely on your own,
without any records of the discussion (physical, digital, or otherwise)**.
## Access and Accommodation
The University of Wisconsin-Madison supports the right of all enrolled students
to a full and equal educational opportunity. The Americans with Disabilities Act
(ADA), Wisconsin State Statute (36.12), and UW-Madison policy (Faculty Document
1071) require that students with disabilities be reasonably accommodated in
instruction and campus life. Reasonable accommodations for students with
disabilities is a shared faculty and student responsibility. Students are
expected to inform me of their need for instructional accommodations by the end
of the third week of the semester, or as soon as possible after a disability has
been incurred or recognized. I will work either directly with you or in
coordination with the McBurney Center to identify and provide reasonable
instructional accommodations. Disability information, including instructional
accommodations as part of a student's educational record, is confidential and
protected under FERPA.

View File

@@ -101,7 +101,7 @@
USENIX 2019.
- Vitaly Feldman.
[*Does Learning Require Memorization? A Short Tale about a Long Tail*](https://arxiv.org/pdf/1906.05271).
arXiv 2019.
STOC 2020.
### Applied Cryptography
- Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish.
@@ -261,10 +261,15 @@
- Abhinav Verma, Hoang M. Le, Yisong Yue, and Swarat Chaudhuri.
[*Imitation-Projected Programmatic Reinforcement Learning*](https://arxiv.org/pdf/1907.05431).
NeurIPS 2019.
- Kenneth L. McMillan.
[*Bayesian Interpolants as Explanations for Neural Inferences*](https://arxiv.org/abs/2004.04198).
arXiv.
# Supplemental Material
- Cynthia Dwork and Aaron Roth.
[*Algorithmic Foundations of Data Privacy*](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf).
- Solon Barocas, Moritz Hardt, and Arvind Narayanan.
[*Fairness and Machine Learning: Limitations and Opportunities*](https://fairmlbook.org/index.html).
- Gilles Barthe, Marco Gaboardi, Justin Hsu, and Benjamin C. Pierce.
[*Programming Language Techniques for Differential Privacy*](https://siglog.hosting.acm.org/wp-content/uploads/2016/01/siglog_news_7.pdf).
- Michael Walfish and Andrew J. Blumberg.
@@ -272,7 +277,7 @@
- Véronique Cortier, Steve Kremer, and Bogdan Warinschi.
[*A Survey of Symbolic Methods in Computational Analysis of Cryptographic Systems*](https://hal.inria.fr/inria-00379776/document).
- Dan Boneh and Victor Shoup.
[*A Graduate Course in Applied Cryptography*](https://crypto.stanford.edu/~dabo/cryptobook/BonehShoup_0_4.pdf).
[*A Graduate Course in Applied Cryptography*](http://toc.cryptobook.us/).
- David Hand.
[*Statistics and the Theory of Measurement*](http://www.lps.uci.edu/~johnsonk/CLASSES/MeasurementTheory/Hand1996.StatisticsAndTheTheoryOfMeasurement.pdf).
- Judea Pearl.

View File

@@ -1,9 +1,32 @@
---
author: Security and Privacy in Data Science (CS 763)
title: Course Welcome
date: September 04, 2019
date: September 02, 2020
---
# Welcome to Virtual CS 763!
## Norms for virtual class
- Mute yourself when you are not talking
- Recommended (not required): turn on your video
- Use the chat for questions/side discussions
> If you wouldn't do it in a real classroom, you probably shouldn't do it
> virtually.
## Guidelines for discussion
- Basically: **be nice to one another**
- WAIT: Why Am I Talking?
- One mic: one person speaks at a time
## Remote students
- Strongly recommended to attend live lectures
- If you can't (e.g., lecture in the middle of the night):
- All lectures will be recorded on BBCU: watch them
- Do **two paper reviews per week** instead of presentation+summary
> Let me know ASAP if you are remote so I can set you up with paper reviews
# Security and Privacy
## It's everywhere!
@@ -40,7 +63,7 @@ date: September 04, 2019
## Five modules
1. Differential privacy
2. Adversarial machine learning
3. Crytpography in machine learning
3. Cryptography in machine learning
4. Algorithmic fairness
5. PL and verification
@@ -68,7 +91,7 @@ date: September 04, 2019
![](images/privacy.png)
## A mathematically solid definition of privacy
## A mathematical definition of privacy
- Simple and clean formal property
- Satisfied by many algorithms
- Degrades gracefully under composition
@@ -126,28 +149,28 @@ date: September 04, 2019
## Lecture schedule
- First ten weeks: **lectures MWF**
- Intensive lectures, get you up to speed
- M: I will present
- WF: You will present
- I will present once a week
- You will present twice a week
- Last five weeks: **no lectures**
- Intensive work on projects
- I will be available to meet, one-on-one
> You must attend lectures and participate
> You should attend/watch **all** lectures
## Class format
- Three components:
1. Paper presentations
2. Presentation summaries
3. Final project
- Announcement/schedule/materials: on [website](https://pages.cs.wisc.edu/~justhsu/teaching/current/cs763/)
- Class mailing list: [compsci763-1-f19@lists.wisc.edu]()
- Announcement/schedule/materials on [website](https://pages.cs.wisc.edu/~justhsu/teaching/current/cs763/)
- Discussions/forming groups on [Piazza](https://piazza.com/class/ke3clkclul16hq)
## Paper presentations
- In pairs, lead a discussion on group of papers
- See website for [detailed instructions](https://pages.cs.wisc.edu/~justhsu/teaching/current/cs763/assignments/presentations/jjj)
- See website for [detailed instructions](https://pages.cs.wisc.edu/~justhsu/teaching/current/cs763/assignments/presentations/)
- See website for [schedule of topics](https://pages.cs.wisc.edu/~justhsu/teaching/current/cs763/schedule/lectures/)
- One week **before** presentation: meet with me
- Come prepared with presentation materials
- Come prepared with draft slides and outline
- Run through your outline, I will give feedback
## Presentation summaries
@@ -159,29 +182,27 @@ date: September 04, 2019
- Writeups will be shared with the class
## Final project
- In groups of three (or very rarely two)
- In groups of 2-3
- See website for [project details](https://pages.cs.wisc.edu/~justhsu/teaching/current/cs763/assignments/project/)
- Key dates:
- **October 11**: Milestone 1
- **November 8**: Milestone 2
- **October 12**: Milestone 1
- **November 6**: Milestone 2
- **End of class**: Final writeups and presentations
## Todos for you
0. Complete the [course survey](https://forms.gle/NvYx3BM7HVkuzYdG6)
0. Complete the [course survey](https://forms.gle/NWAYMf6ZzV3bFKC46)
1. Explore the [course website](https://pages.cs.wisc.edu/~justhsu/teaching/current/cs763/)
2. Think about which lecture you want to present
3. Think about which lecture you want to summarize
4. Form project groups and brainstorm topics
> Signup for slots and projects [here](https://docs.google.com/spreadsheets/d/1hSbRy0mo3PjlozN0Ph1JkP5JwlRG8y7ukuCdorofncA/edit?usp=sharing)
> Sign up for slots and projects [here](https://docs.google.com/spreadsheets/d/1Qiq6RtBiHD6x7t-wPqAykvTDdbbBvZYSMZ9FrKUHKm4/edit?usp=sharing)
## We will move quickly
- First deadline: **next Monday, September 9**
- First deadline: **next Wednesday, September 9**
- Form paper and project groups
- Signup sheet [here](https://docs.google.com/spreadsheets/d/1hSbRy0mo3PjlozN0Ph1JkP5JwlRG8y7ukuCdorofncA/edit?usp=sharing)
- Please: don't sign up for the same slot
- First slot is soon: **next Friday, September 13**
- Only slot for presenting differential privacy
- Signup sheet [here](https://docs.google.com/spreadsheets/d/1Qiq6RtBiHD6x7t-wPqAykvTDdbbBvZYSMZ9FrKUHKm4/edit?usp=sharing)
- First slot is soon: **Monday, September 14**
- I will help the first group prepare
# Defining privacy

View File

@@ -5,9 +5,9 @@ The first key date is **September 9**. By this date, you should:
come up with **1-2 sentences** describing your initial direction. This is not
a firm commitment---you can change your topic as you learn more.
The signup sheet is [here](https://docs.google.com/spreadsheets/d/1hSbRy0mo3PjlozN0Ph1JkP5JwlRG8y7ukuCdorofncA/edit?usp=sharing).
The signup sheet is [here](https://docs.google.com/spreadsheets/d/1Qiq6RtBiHD6x7t-wPqAykvTDdbbBvZYSMZ9FrKUHKm4/edit?usp=sharing).
## Project Deadlines
- Milestone 1: **October 11**
- Milestone 2: **November 8**
- Final writeup and presentation: **December 11** (TBD)
- Milestone 1: **October 12**
- Milestone 2: **November 6**
- Final writeup: **December 11**

View File

@@ -3,39 +3,41 @@
Date | Topic | Presenters | Summarizers | Notes
:----:|-------|:----------:|:-----------:|:-----:
| <center> <h4> **Differential Privacy** </h4> </center> | | |
9/4 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Reading:** [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf) | JH | --- |
9/6 | Basic private mechanisms <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.2-4 | JH | --- |
9/9 | Composition and closure properties <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.5 | JH | --- | [Signups](https://docs.google.com/spreadsheets/d/1hSbRy0mo3PjlozN0Ph1JkP5JwlRG8y7ukuCdorofncA/edit?usp=sharing) Due
9/11 | What does differential privacy actually mean? <br> **Reading:** [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) | JH | --- |
9/13 | Differentially private machine learning <br> **Reading:** [*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022) <br> **Reading:** [*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755) | Robert/Shengwen | Zach/Jialu |
9/2 | [Course welcome](../resources/slides/lecture-welcome.html) <br> **Reading:** [*How to Read a Paper*](https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf) | Justin | --- | [[slides]](../resources/slides/lecture-welcome.html)
9/4 | Basic private mechanisms <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.2-4 | Justin | --- |
9/7 | <center> **NO CLASS: LABOR DAY** </center> | | |
9/9 | Composition and closure properties <br> **Reading:** [Dwork and Roth](https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) 3.5 | Justin | --- | [Signups](https://docs.google.com/spreadsheets/d/1Qiq6RtBiHD6x7t-wPqAykvTDdbbBvZYSMZ9FrKUHKm4/edit?usp=sharing) Due
9/11 | What does differential privacy actually mean? <br> **Reading:** [Lunchtime for Differential Privacy](https://github.com/frankmcsherry/blog/blob/master/posts/2016-08-16.md) | Justin | --- |
9/14 | Private machine learning <br> **Reading:** [*On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches*](https://arxiv.org/pdf/1708.08022) <br> **Reading:** [*Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data*](https://arxiv.org/pdf/1610.05755) | Nathan/Matt T. | Saniya/Marcus |
9/16 | Privately generating synthetic data <br> **Reading:** [*A Simple and Practical Algorithm for Differentially Private Data Release*](https://papers.nips.cc/paper/4548-a-simple-and-practical-algorithm-for-differentially-private-data-release.pdf) <br> **Reading:** [*Private Post-GAN Boosting*](https://arxiv.org/pdf/2007.11934) | Zijian/Yuchen | Deepan/Kendall |
| <center> <h4> **Adversarial Machine Learning** </h4> </center> | |
9/16 | Overview and basic concepts | JH | --- |
9/18 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572) | JH | Robert/Shengwen |
9/20 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) <br> **Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | Somya/Zi | Miru/Pierre |
9/23 | Defenses and detection: challenges <br> **Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf) <br> **Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | JH | --- |
9/25 | Certified defenses <br> **Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf) <br> **Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | Joseph/Nils | Siddhant/Goutham |
9/27 | Adversarial training <br> **Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) <br> **See also:** [*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204) | Siddhant/Goutham | Somya/Zi |
9/18 | Overview and basic concepts | Justin | --- |
9/21 | Adversarial examples <br> **Reading:** [*Intriguing Properties of Neural Networks*](https://arxiv.org/pdf/1312.6199.pdf) <br> **Reading:** [*Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples*](https://arxiv.org/pdf/1605.07277) <br> **See also:** [*Explaining and Harnessing Adversarial Examples*](https://arxiv.org/pdf/1412.6572) | Deepan and Kendall | Keaton/Anna |
9/23 | Data poisoning <br> **Reading:** [*Poisoning Attacks against Support Vector Machines*](https://arxiv.org/pdf/1206.6389) <br> **Reading:** [*Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks*](https://arxiv.org/pdf/1804.00792) | Grishma/Lokit | Amos/Suleman |
9/25 | Defenses and detection: challenges <br> **Reading:** [*Towards Evaluating the Robustness of Neural Networks*](https://arxiv.org/pdf/1608.04644.pdf) <br> **Reading:** [*Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods*](https://arxiv.org/pdf/1705.07263.pdf) | Justin | --- |
9/28 | Certified defenses <br> **Reading:** [*Certified Defenses for Data Poisoning Attacks*](https://arxiv.org/pdf/1706.03691.pdf) <br> **Reading:** [*Certified Defenses against Adversarial Examples*](https://arxiv.org/pdf/1801.09344) | Yucheng/Matt W. | Roger/Zifan |
9/30 | Adversarial training <br> **Reading:** [*Towards Deep Learning Models Resistant to Adversarial Attacks*](https://arxiv.org/pdf/1706.06083.pdf) <br> **See also:** [*Ensemble Adversarial Training: Attacks and Defenses*](https://arxiv.org/pdf/1705.07204) | Nikhil/Scott | Grishma/Lokit |
| <center> <h4> **Applied Cryptography** </h4> </center> | | |
9/30 | Overview and basic constructions <br> **Reading:** [Boneh and Shoup](https://crypto.stanford.edu/~dabo/cryptobook/BonehShoup_0_4.pdf), 11.6, 19.4 <br> **See also:** [Evans, Kolesnikov, and Rosulek](https://securecomputation.org/), Chapter 3 | JH | --- |
10/2 | SMC for machine learning <br> **Reading:** [*Helen: Maliciously Secure Coopetitive Learning for Linear Models*](https://arxiv.org/pdf/1907.07212) <br> **See also:** [*Secure Computation for Machine Learning With SPDZ*](https://arxiv.org/pdf/1901.00329) | Varun/Vibhor/Adarsh | --- |
10/4 | Secure data collection at scale <br> **Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) | Abhirav/Rajan | --- |
10/7 | Verifiable computing <br> **Reading:** [*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268) | JH | --- |
10/9 | Side channels and implementation issues <br> **Reading:** [*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf) | JH | --- |
10/11 | Model watermarking <br> **Reading:** [*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633) <br> **See also:** [*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf) | Noor/Shashank | Joseph/Nils| MS1 Due
10/2 | Overview and basic constructions <br> **Reading:** [Boneh and Shoup](http://toc.cryptobook.us/), 11.6, 19.4 <br> **See also:** [Evans, Kolesnikov, and Rosulek](https://securecomputation.org/), Chapter 3 | Justin | --- |
10/5 | Secure data collection at scale <br> **Reading:** [*Prio: Private, Robust, and Scalable Computation of Aggregate Statistics*](https://people.csail.mit.edu/henrycg/files/academic/papers/nsdi17prio.pdf) | Saniya/Marcus | Jinwoo/Mazharul |
10/7 | Verifiable computing <br> **Reading:** [*SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud*](https://arxiv.org/pdf/1706.10268) | Mike | Siyang/Dan |
10/9 | Side channels and implementation issues <br> **Reading:** [*On Significance of the Least Significant Bits For Differential Privacy*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.5957&rep=rep1&type=pdf) | Siyang/Dan | Nathan/Matt T. |
10/12 | Model watermarking <br> **Reading:** [*Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring*](https://arxiv.org/pdf/1802.04633) <br> **See also:** [*Protecting Intellectual Property of Deep Neural Networks with Watermarking*](https://gzs715.github.io/pubs/WATERMARK_ASIACCS18.pdf) | Amos/Suleman | Sidharth/Martin | MS1 Due
| <center> <h4> **Algorithmic Fairness** </h4> </center> | | |
10/14 | Overview and basic notions <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/index.html), Chapter 1-2 | JH | --- |
10/16 | Individual and group fairness <br> **Reading:** [*Fairness through Awareness*](https://arxiv.org/pdf/1104.3913) <br> **Reading:** [*Equality of Opportunity in Supervised Learning*](https://arxiv.org/pdf/1610.02413) | JH | Jack/Jack |
10/18 | Inherent tradeoffs <br> **Reading:** [*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807) | Bobby | --- |
10/21 | Defining fairness: challenges <br> **Reading:** [*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104) <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/causal.html), Chapter 4 | JH | Bobby |
10/23 | Fairness in unsupervised learning <br> **Reading:** [*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520) <br> **See also:** [*Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints*](https://arxiv.org/pdf/1707.09457) | Zach/Jialu | Noor/Shashank |
10/25 | Beyond observational measures <br> **Reading:** [*Avoiding Discrimination through Causal Reasoning*](https://arxiv.org/pdf/1706.02744) <br> **See also:** [*Counterfactual Fairness*](https://arxiv.org/pdf/1703.06856) | Nat/Geetika | Varun/Vibhor/Adarsh |
10/14 | Overview and basic notions <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/index.html), Chapter 1-2 <br> **See also:** [*50 Years of Test (Un)fairness: Lessons for Machine Learning*](https://arxiv.org/pdf/1811.10104) | Justin | --- |
10/16 | Individual and group fairness <br> **Reading:** [*Fairness through Awareness*](https://arxiv.org/pdf/1104.3913) <br> **Reading:** [*Equality of Opportunity in Supervised Learning*](https://arxiv.org/pdf/1610.02413) | Sidharth/Martin | Vishal/Nikita |
10/19 | Inherent tradeoffs <br> **Reading:** [*Inherent Trade-Offs in the Fair Determination of Risk Scores*](https://arxiv.org/pdf/1609.05807) | Shiyu/Rita | Rishabh/Aaron |
10/21 | Fairness and causality <br> **Reading:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/causal.html), Chapter 4 | Justin | --- |
10/23 | Fairness in unsupervised learning <br> **Reading:** [*Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings*](https://arxiv.org/pdf/1607.06520) <br> **See also:** [*Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints*](https://arxiv.org/pdf/1707.09457) | Keaton/Anna | Shiyu/Rita |
10/26 | Testing fairness, empirically <br> **Reading:** [*Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination*](https://arxiv.org/pdf/1408.6491.pdf) <br> **Reading:** [*Discrimination through optimization: How Facebook's ad delivery can lead to skewed outcomes*](https://arxiv.org/pdf/1904.02095.pdf) <br> **See also:** [Barocas, Hardt, and Narayanan](https://fairmlbook.org/testing.html), Chapter 5 | Rishabh/Aaron | Mike |
| <center> <h4> **PL and Verification** </h4> </center> | | |
10/28 | Overview and basic notions | JH | --- |
10/30 | Probabilistic programming languages <br> **Reading:** [*Probabilistic Programming*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/fose-icse2014.pdf) | Miru/Pierre | Nat/Geetika |
11/1 | Automata learning and interpretability <br> **Reading:** [*Model Learning*](https://m-cacm.acm.org/magazines/2017/2/212445-model-learning/fulltext) <br> **Reading:** [*Interpreting Finite Automata for Sequential Data*](https://arxiv.org/pdf/1611.07100) | Jack/Jack | Abhirav/Rajan |
11/4 | Programming languages for differential privacy <br> **Reading:** [*Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy*](https://www.cis.upenn.edu/~bcpierce/papers/dp.pdf) <br> **See also:** [*Programming Language Techniques for Differential Privacy*](https://siglog.hosting.acm.org/wp-content/uploads/2016/01/siglog_news_7.pdf) | JH | --- |
11/6 | Verifying neural networks <br> **Reading:** [*AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation*](https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf) <br> **See also:** [*DL2: Training and Querying Neural Networks with Logic*](http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf) | JH | --- |
11/8 | Verifying probabilistic programs <br> **Reading:** [*A Program Logic for Union Bounds*](https://arxiv.org/pdf/1602.05681) <br> **See also:** [*Advances and Challenges of Probabilistic Model Checking*](https://www.prismmodelchecker.org/papers/allerton10.pdf) | JH | | MS2 Due
| <center> <h4> **No&nbsp;Lectures:&nbsp;Work&nbsp;on&nbsp;Projects** </h4> </center> | | |
12/9 | Project Presentations 1 <br> - Nils, Joseph, Abhirav <br> - Robert, Noor, Shashank <br> - Jack L., Geetika <br> - Zi | | |
12/11 | Project Presentations 2 <br> - Vibhor, Varun, Adarsh <br> - Siddhant, Goutham, Somya <br> - Nat, Zach, Jialu <br> - Miru, Pierre, Jack S. <br> - Shengwen, Rajan, Bobby | | | Projects Due
10/28 | Overview and basic notions | Justin | --- |
10/30 | Probabilistic programming languages <br> **Reading:** [*Probabilistic Programming*](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/fose-icse2014.pdf) | Vishal/Nikita | Zijian/Yuchen |
11/2 | Verifying probabilistic programs <br> **Reading:** [*A Program Logic for Union Bounds*](https://arxiv.org/pdf/1602.05681) <br> **See also:** [*Advances and Challenges of Probabilistic Model Checking*](https://www.prismmodelchecker.org/papers/allerton10.pdf) | Jinwoo/Mazharul | Yucheng/Matt W. |
11/4 | Languages for differential privacy <br> **Reading:** [*Privacy Integrated Queries*](https://www.microsoft.com/en-us/research/wp-content/uploads/2009/06/sigmod115-mcsherry.pdf) <br> **See also:** [*Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy*](https://www.cis.upenn.edu/~bcpierce/papers/dp.pdf) <br> **See also:** [*Programming Language Techniques for Differential Privacy*](https://siglog.hosting.acm.org/wp-content/uploads/2016/01/siglog_news_7.pdf) | Ashish/Athena | Nikhil/Scott |
11/6 | Verifying neural networks <br> **Reading:** [*AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation*](https://files.sri.inf.ethz.ch/website/papers/sp2018.pdf) <br> **See also:** [*DL2: Training and Querying Neural Networks with Logic*](http://proceedings.mlr.press/v97/fischer19a/fischer19a.pdf) | Roger/Zifan | Ashish/Athena | MS2 Due
| <center> <h4> **No Lectures: Work on Projects** </h4> </center> | | |
12/4 | <center> **Project Presentations** </center> <br> Grishma, Sidharth, Lokit <br> Saniya, Margaret, Kendall <br> Mike, Zichen, Dong <br> Mazharul <br> Deepan, Siyang <br> Aaron | | |
12/7 | <center> **Project Presentations** </center> <br> Amos, Suleman, Rita <br> Vishal, Nikita, Dan <br> Zijian, Yuchen <br> Ashish, Athena <br> Roger, Zifan | | |
12/9 | <center> **Project Presentations** </center> <br> Anna, Keaton, Shiyu <br> Nathan <br> Jinwoo <br> Martin <br> Nikhil, Scott <br> Rishabh, Matt, Yucheng | | |
12/11 | <center> **PROJECTS DUE** </center> | | | Projects Due

View File

@@ -24,3 +24,40 @@ areas, depending on student interest:
- Zero-knowledge proofs
- Secure multi-party computation
- Verifiable computation
## Learning Outcomes
By the end of this course, you should be able to...
- Summarize the basic concepts in differential privacy, applied cryptography,
and adversarial machine learning.
- Use techniques from differential privacy to design privacy-preserving data
analyses.
- Grasp the high-level concepts from research literature on the main course
topics.
- Present and lead a discussion on recent research results.
- Carry out an in-depth exploration of one topic in the form of a self-directed
research project.
## Credit Information
This is a **3-credit** graduate seminar. For the first 10 weeks of the fall
semester, we will meet for three 75-minute class periods each week. You should
expect to work on course learning activities for about 3 hours out of classroom
for each hour of class.
## Access and Accommodation
The University of Wisconsin-Madison supports the right of all enrolled students
to a full and equal educational opportunity. The Americans with Disabilities Act
(ADA), Wisconsin State Statute (36.12), and UW-Madison policy (Faculty Document
1071) require that students with disabilities be reasonably accommodated in
instruction and campus life. Reasonable accommodations for students with
disabilities is a shared faculty and student responsibility. Students are
expected to inform me of their need for instructional accommodations by the end
of the third week of the semester, or as soon as possible after a disability has
been incurred or recognized. I will work either directly with you or in
coordination with the McBurney Center to identify and provide reasonable
instructional accommodations. Disability information, including instructional
accommodations as part of a student's educational record, is confidential and
protected under FERPA.

View File

@@ -1,25 +1,30 @@
site_name: 'CS 763: Security and Privacy in Data Science (Fall 2019)'
site_name: 'CS 763: Security and Privacy in Data Science (Fall 2020)'
site_url: ''
repo_url: 'https://git.justinh.su/justhsu/cs763'
site_description: 'Course webpage for CS 763: Security and Privacy in Data Science (Fall 2019)'
site_description: 'Course webpage for CS 763: Security and Privacy in Data Science (Fall 2020)'
site_author: 'Justin Hsu'
theme:
name: 'material'
language: 'en'
feature:
tabs: 'true'
features:
- navigation.tabs
- navigation.instant
logo: 'assets/images/favicon.ico'
favicon: 'assets/images/favicon.ico'
palette:
primary: red
accent: red
primary: light blue
accent: light blue
markdown_extensions:
- admonition
- pymdownx.arithmatex
- pymdownx.tilde
nav:
- Home:
- About: 'index.md'
- Syllabus: 'syllabus.md'
- Organization: 'org.md'
- Schedule:
- Lectures: 'schedule/lectures.md'
- Deadlines: 'schedule/deadlines.md'
@@ -30,5 +35,5 @@ nav:
- Assignments:
- Presentations: 'assignments/presentations.md'
- Summaries: 'assignments/summaries.md'
- Reviews: 'assignments/reviews.md'
- Projects: 'assignments/project.md'
- Gallery: 'assignments/gallery.md'
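For reference, a minimal local-preview workflow for a configuration like this (an editorial sketch; the package list comes from the Makefile's `install` target above):

```sh
# Install MkDocs, the Material theme, and the Markdown extensions used by this config.
pip install mkdocs mkdocs-material pymdown-extensions

# Serve the site with live reload, by default at http://127.0.0.1:8000/.
mkdocs serve

# Or build the static site into ./site/, as the Makefile's `build` target does.
mkdocs build
```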