I am an assistant professor of [Computer Science](https://www.cs.wisc.edu/) at
the [University of Wisconsin--Madison](https://www.wisc.edu).
**I am currently looking for good students!**

Previously, I was a postdoc in the [Department of Computer
Science](https://www.cs.cornell.edu/) at [Cornell
University](https://www.cornell.edu/), hosted by [Nate
Foster](http://www.cs.cornell.edu/~jnfoster/), [Bobby
Kleinberg](http://www.cs.cornell.edu/~rdk/), and [Dexter
Kozen](http://www.cs.cornell.edu/~kozen/), and in the [Programming Principles,
Logic, and Verification Group](http://pplv.cs.ucl.ac.uk/welcome/) at
[University College London](https://www.ucl.ac.uk/), hosted by [Alexandra
Silva](http://www.alexandrasilva.org/). Before that, I was a graduate student in the
[Department of Computer and Information Science](https://cis.upenn.edu) at the [University of
Pennsylvania](https://www.upenn.edu), where I was very fortunate to be advised
by [Benjamin Pierce](https://cis.upenn.edu/~bcpierce) and [Aaron
Roth](https://cis.upenn.edu/~aaroth).

## Research Interests ##

I design methods to **formally verify** that programs are correct, especially
programs that use **randomization**. Such programs can be easy to show correct
on paper, but surprisingly challenging for computers to analyze. Accordingly,
my research blends ideas from two classical areas of computer science:
**randomized algorithms** from theoretical computer science (**TCS**) and
**formal verification**.

Drawing inspiration from how humans reason about randomized algorithms, we can
build simpler and more automated verification techniques. In the past, I've
applied this approach to properties like **accuracy**, **incentive
compatibility**, Markov chain **mixing**, and various notions of **algorithmic
stability**.

A particular focus of my work has been [**differential
privacy**](https://en.wikipedia.org/wiki/Differential_privacy), a rigorous
definition of privacy that is currently under extensive study.
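
In its pure $\varepsilon$ form, the guarantee can be stated in one line: a
randomized mechanism $M$ is $\varepsilon$-differentially private if, for every
pair of inputs $D$ and $D'$ differing in a single individual's record and every
set of outputs $S$,

$$\Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S],$$

where smaller values of the privacy parameter $\varepsilon$ give a stronger
guarantee.
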
I have investigated a variety of formal methods---such as [**type
systems**](https://en.wikipedia.org/wiki/Type_system) and [**program
logics**](https://en.wikipedia.org/wiki/Hoare_logic)---to verify that programs
are differentially private.

From a more traditional algorithms perspective, I am also interested in applying
differential privacy to optimization, machine learning, and mechanism design.

## Teaching ##

- **Security and Privacy in Data Science (CS 763)**: [F19](https://pages.cs.wisc.edu/~justhsu/teaching/current/cs763/)
- **Introduction to the Theory and Design of PL (CS 538)**: [S19](https://pages.cs.wisc.edu/~justhsu/teaching/s19/cs538/)
- **Topics in Security and Privacy Technologies (CS 839)**: [F18](https://pages.cs.wisc.edu/~justhsu/teaching/f18/cs839/)

## Service ##

- **2020** AAAI, CSF, LAFI, WoLLIC, PLMW
- **2019** POPL, PLMW, POST, CSF, DARS (co-chair)
- **2018** LICS, WWW
- **2017** FCS, TPDP, MFPS
- **2016** PLDI (ERC)