Rearrange.

parent 59fcc4e0bb
commit 28091ba43e
previous.md

@ -1,37 +0,0 @@
Security and Privacy are emerging as very important research areas.
Vulnerabilities in software are found and exploited almost every day,
with disastrous consequences (e.g., the massive Equifax data
breach). Moreover, our private data is increasingly at risk, and thus
techniques that enhance the privacy of sensitive data (known as
privacy-enhancing technologies, or PETs) are becoming increasingly
important. Machine learning (ML) is also increasingly being used to
make decisions in critical sectors (e.g., health care, automation, and
finance), yet the presence of malicious adversaries is generally
ignored when these algorithms are deployed.

This advanced topics class will tackle techniques related to all of these
themes. We will investigate techniques for making software more secure.
Techniques for ensuring the privacy of sensitive data will also be
covered, and adversarial ML (what happens to ML algorithms in the
presence of adversaries?) will be discussed as well. A partial list of
the topics we will cover is given below.

Software Security:
- Information flow
- Techniques for finding vulnerabilities in software
- Defense techniques (e.g., control-flow integrity)

Privacy:
- Differential Privacy
- Zero-knowledge proofs
- Secure multi-party computation

Adversarial ML:
- Training-time attacks
- Test-time attacks
- Model theft attacks

Grading: there are three components:
- Reading research papers and writing reviews
- A few homeworks
- A class project
syllabus.md

@ -0,0 +1,41 @@
Security and Privacy are rapidly emerging as critical research areas.
Vulnerabilities in software are found and exploited almost every day,
with increasingly serious consequences (e.g., the massive Equifax data
breach). Moreover, our private data is increasingly at risk, and thus
techniques that enhance the privacy of sensitive data (known as
privacy-enhancing technologies, or PETs) are becoming increasingly
important. Machine learning (ML) is also increasingly being used to
make decisions in critical sectors (e.g., health care, automation, and
finance), yet the presence of malicious adversaries is generally
ignored when these algorithms are deployed.

This advanced topics class will tackle techniques related to all of these
themes. We will investigate techniques for making software more secure.
Techniques for ensuring the privacy of sensitive data will also be
covered, and adversarial ML (what happens to ML algorithms in the
presence of adversaries?) will be discussed as well. A partial list of
the topics we will cover is given below.
Software Security
- Secure information flow
- Finding vulnerabilities
- Defensive measures and mitigations

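One way to make "secure information flow" concrete is dynamic taint tracking: values derived from secret sources are marked, and a policy check stops them from flowing to public outputs. The sketch below is purely illustrative (all names are invented, not taken from any course material):

```python
class Tainted:
    """Wraps a value and remembers that it came from a secret source."""
    def __init__(self, value):
        self.value = value

def taint(value):
    return Tainted(value)

def add(a, b):
    # Taint propagates: if either operand is tainted, the result is too.
    av = a.value if isinstance(a, Tainted) else a
    bv = b.value if isinstance(b, Tainted) else b
    result = av + bv
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(result)
    return result

def public_sink(x):
    # Policy: no secret-derived (tainted) data may reach a public output.
    if isinstance(x, Tainted):
        raise PermissionError("illegal flow: secret data reached a public sink")
    return x

secret_salary = taint(90000)
bonus = add(secret_salary, 5000)   # derived from a secret, so still tainted
public_sink(add(1, 2))             # untainted data flows freely
try:
    public_sink(bonus)             # the illegal flow is blocked
except PermissionError as e:
    print("blocked:", e)
```

Real systems enforce this statically (type systems for noninterference) or dynamically (taint-tracking runtimes); the toy above only shows the propagate-and-check idea.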
Differential Privacy
- Basic mechanisms
- Local Differential Privacy

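As a taste of the "basic mechanisms" bullet: the Laplace mechanism releases a query answer plus noise drawn with scale sensitivity/epsilon, which yields epsilon-differential privacy for queries of bounded L1 sensitivity. A minimal sketch (function name and parameters are our own, not from the course):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value with Laplace(sensitivity/epsilon) noise added,
    giving epsilon-differential privacy for a query whose L1 sensitivity
    is `sensitivity` (e.g., 1 for a counting query)."""
    scale = sensitivity / epsilon
    # Sample Laplace noise by inverse-CDF from a uniform in (-1/2, 1/2).
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Example: privately release a count of 42 with epsilon = 0.5.
noisy_count = laplace_mechanism(42, sensitivity=1, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the answer is accurate only on average, which is the basic privacy/utility trade-off the topic studies.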
Cryptographic Techniques
- Zero-knowledge proofs
- Secure multi-party computation
- Verifiable computation

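The flavor of secure multi-party computation can be shown with additive secret sharing: a secret is split into random shares, any proper subset of which reveals nothing, yet parties can add shared values by local work alone. A sketch under our own choices of modulus and helper names (not from the course):

```python
import random

P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

def share(secret, n_parties, rng=random):
    """Split a secret into n additive shares that sum to it mod P.
    Any n-1 of the shares are uniformly random, revealing nothing."""
    shares = [rng.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Secure addition: each party adds its share of x and its share of y
# locally; the results are shares of x + y, and no party learns x or y.
x_shares = share(123, 3)
y_shares = share(456, 3)
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
assert reconstruct(sum_shares) == 579
```

Multiplication of shared values needs extra interaction (e.g., preprocessed multiplication triples), which is where full MPC protocols get interesting.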
Adversarial Machine Learning
- Training-time attacks
- Test-time attacks
- Model theft attacks

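A test-time attack can be sketched with a fast-gradient-sign-style perturbation against a toy logistic-regression model: nudge each input feature by a small amount in the direction that increases the loss for the true label. The model weights below are made up purely for illustration:

```python
import math

# A fixed, already-"trained" logistic-regression model (weights invented).
w = [2.0, -3.0]
b = 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))   # probability of class 1

def fgsm(x, y_true, eps):
    """Fast-gradient-sign-style attack: move each feature by eps in the
    direction that increases the cross-entropy loss for the true label."""
    p = predict(x)
    # For logistic regression, d(loss)/d(x_i) = (p - y_true) * w_i.
    grad = [(p - y_true) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [1.0, 0.0]                     # the model is confident this is class 1
x_adv = fgsm(x, y_true=1, eps=0.9)
# A bounded per-feature perturbation sharply lowers the model's confidence.
```

Training-time (poisoning) and model-theft attacks target the same pipeline at different stages: the training data and the deployed model's query interface, respectively.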
Grading will be based on three components:
- Reading research papers and writing reviews
- Homeworks
- Class project