Security and Privacy are rapidly emerging as critical research areas.
Vulnerabilities in software are found and exploited almost every day,
with increasingly serious consequences (e.g., the massive Equifax data
breach). Our private data is also increasingly at risk, making
techniques that enhance the privacy of sensitive data, known as
privacy-enhancing technologies (PETs), ever more important. Finally,
machine learning (ML) is increasingly used to make decisions in
critical sectors (e.g., health care, automation, and finance), yet the
presence of malicious adversaries is generally ignored when these
algorithms are deployed.

This advanced topics class will tackle techniques related to all of
these themes. We will investigate techniques for making software more
secure, cover techniques for ensuring the privacy of sensitive data,
and discuss adversarial ML (what happens to ML algorithms in the
presence of adversaries?). A partial list of the topics we will cover
is given below.

Software Security
- Secure information flow (sketched after this list)
- Finding vulnerabilities
- Defensive measures and mitigations
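
To give a flavor of the secure information flow item above, here is a
minimal sketch (in Python) of dynamic label checking: values carry a
secrecy label, and secret data is prevented from flowing to a public
sink. Everything here (the LOW/HIGH lattice, the class and function
names, the example values) is invented purely for illustration, and the
sketch only handles explicit flows.

    # Toy explicit-flow check: every value carries a secrecy label, and a
    # labeled value may only be released to a sink that is at least as secret.
    LOW, HIGH = 0, 1  # two-point lattice: LOW is public, HIGH is secret

    class Labeled:
        def __init__(self, value, label):
            self.value = value
            self.label = label

        def combine(self, other, op):
            # A result is at least as secret as either of its inputs.
            return Labeled(op(self.value, other.value), max(self.label, other.label))

    def release(sink_label, data):
        # Reject explicit flows of HIGH data into a LOW sink.
        if data.label > sink_label:
            raise PermissionError("information-flow violation: secret data sent to a public sink")
        print(data.value)

    password = Labeled("hunter2", HIGH)   # secret input
    greeting = Labeled("hello", LOW)      # public input

    release(LOW, greeting)                                      # fine: LOW -> LOW
    mixed = greeting.combine(password, lambda a, b: a + b)      # result is HIGH
    release(HIGH, mixed)                                        # fine: HIGH sink
    # release(LOW, mixed)   # would raise: HIGH data flowing to a LOW sink

Static information-flow type systems enforce the same lattice
discipline at compile time and also handle implicit flows through
control flow, which this toy dynamic check does not.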

Differential Privacy
- Basic mechanisms (see the sketch after this list)
- Local Differential Privacy
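
As a small taste of the basic mechanisms bullet, the sketch below shows
the Laplace mechanism applied to a counting query, whose sensitivity is
1 (adding or removing one record changes the count by at most 1). The
example data, the choice of epsilon, and the function names are
arbitrary choices made for illustration.

    import random

    def laplace_noise(scale):
        # A Laplace(0, scale) sample is the difference of two i.i.d.
        # exponential samples with rate 1/scale.
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def private_count(records, predicate, epsilon):
        # A counting query has sensitivity 1, so the noise scale is 1/epsilon.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    ages = [23, 35, 41, 29, 52, 67, 31]   # toy "database"
    print(private_count(ages, lambda a: a >= 40, epsilon=0.5))

Smaller epsilon means more noise and a stronger privacy guarantee;
local differential privacy moves this noising step onto each user's
device before any data is collected.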

Cryptographic Techniques
- Zero-knowledge proofs
- Secure multi-party computation (see the example after this list)
- Verifiable computation
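
To illustrate the secure multi-party computation bullet, here is a
minimal sketch of additive secret sharing over a prime field, one
common building block: each party holds a random-looking share of every
input, sums the shares it holds locally, and only the total is
reconstructed. The modulus, the party count, and the salary figures are
arbitrary values chosen for illustration.

    import random

    PRIME = 2**61 - 1  # field modulus; a Mersenne prime picked for convenience

    def share(secret, n_parties):
        # Split `secret` into n additive shares that sum to it modulo PRIME.
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares):
        return sum(shares) % PRIME

    # Three parties compute the sum of their salaries without revealing any
    # individual salary: party j only ever sees the j-th share of each input.
    salaries = [62000, 81000, 57000]
    shares_per_input = [share(s, 3) for s in salaries]
    party_totals = [sum(col) % PRIME for col in zip(*shares_per_input)]
    print(reconstruct(party_totals))   # 200000, the sum of the salaries

Any strict subset of the shares of a secret is uniformly random, which
is what keeps each individual input hidden; multiplying shared values,
as general MPC protocols must, requires machinery beyond this sketch.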

Adversarial Machine Learning
- Training-time attacks
- Test-time attacks (illustrated after this list)
- Model theft attacks
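
As a glimpse of test-time (evasion) attacks, the sketch below applies a
fast-gradient-sign-style perturbation to a toy logistic-regression
classifier: the input is nudged, within an L-infinity budget, in the
direction that increases the model's loss. The weights, the input, the
label, and the value of epsilon are all made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=20)    # hypothetical "trained" weights
    b = 0.1
    x = rng.normal(size=20)    # a test input
    y = 1.0                    # its true label (0 or 1)

    def predict(x):
        # Logistic-regression probability that the label is 1.
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    # For the logistic loss, the gradient with respect to the input is
    # (prediction - y) * w; the attack steps by epsilon in the sign of that gradient.
    epsilon = 0.25
    grad_x = (predict(x) - y) * w
    x_adv = x + epsilon * np.sign(grad_x)

    print("clean prediction      :", predict(x))
    print("adversarial prediction:", predict(x_adv))

Training-time (poisoning) attacks instead corrupt the data the model
learns from, and model theft attacks reconstruct a model through
queries; both operate under threat models different from this sketch.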

Grading will be based on three components:
- Reading research papers and writing reviews
- Homework assignments
- Class project