Hi! I am a second-year PhD student at UC Berkeley advised by Dan Klein and Dawn Song. I work on Machine Learning and Natural Language Processing as part of Berkeley NLP, the RISE Lab, and Berkeley AI Research (BAIR).
Before this, I did my undergrad at the University of Maryland, where I worked with Jordan Boyd-Graber. I spent most of 2019 working at the Allen Institute for AI with Matt Gardner and Sameer Singh.
Robustness We study how an adversary or a distribution shift can affect model behavior. Our attack, Universal Adversarial Triggers, exposes severe failures of state-of-the-art NLP systems. We have also revealed counterintuitive model behavior on reduced and human-modified inputs. Our current research works to increase model robustness and studies new attack vectors such as model stealing and data poisoning.
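As a toy illustration of the "reduced inputs" idea, the sketch below greedily deletes words while a model's prediction stays fixed. Everything here is hypothetical: the bag-of-words `predict` function is a stand-in classifier, not any of our actual models.

```python
# Toy input reduction: repeatedly drop the word whose removal least hurts
# the model's confidence, as long as the predicted label is unchanged.
# The "model" is a crude word-count sentiment scorer (an assumption for
# illustration only).

POSITIVE = {"great", "wonderful", "enjoyable"}
NEGATIVE = {"boring", "awful", "dull"}

def predict(tokens):
    """Toy classifier: returns (label, confidence) from sentiment-word counts."""
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    label = "positive" if score >= 0 else "negative"
    return label, abs(score)

def reduce_input(tokens):
    """Greedily remove words while the predicted label stays the same."""
    label, _ = predict(tokens)
    while len(tokens) > 1:
        # Try deleting each word; keep the deletion with the highest confidence.
        candidates = [tokens[:i] + tokens[i + 1:] for i in range(len(tokens))]
        best = max(candidates, key=lambda c: predict(c)[1])
        if predict(best)[0] != label:
            break  # any further deletion would flip the prediction
        tokens = best
    return tokens

print(reduce_input("the movie was long but quite wonderful overall".split()))
```

With a real neural model, the reduced input is often a short, nonsensical fragment that the model nonetheless classifies with high confidence, which is what makes the behavior counterintuitive.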
Interpretability We aim to open up the black box of machine learning by interpreting model predictions. We facilitate research on and adoption of interpretation methods with our open-source toolkit (also see our EMNLP tutorial). We have analyzed the limitations of interpretation methods and proposed new saliency map methods. Our current research probes pretrained models and studies how training examples affect test predictions.
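A minimal sketch of one common interpretation method, a leave-one-out saliency map: score each input word by how much the model's confidence drops when that word is ablated. The bag-of-words `confidence` function is a hypothetical stand-in model, not our toolkit's API.

```python
# Toy leave-one-out saliency: a word's importance is the confidence drop
# caused by deleting it. The "model" is a crude sentiment-word counter,
# assumed purely for illustration.

POSITIVE = {"great", "wonderful", "enjoyable"}
NEGATIVE = {"boring", "awful", "dull"}

def confidence(tokens):
    """Toy confidence: margin between positive and negative word counts."""
    return abs(sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens))

def saliency(tokens):
    """Map each word to the confidence drop when it is removed."""
    base = confidence(tokens)
    return {
        tokens[i]: base - confidence(tokens[:i] + tokens[i + 1:])
        for i in range(len(tokens))
    }

scores = saliency("a truly wonderful film".split())
print(max(scores, key=scores.get))  # prints "wonderful"
```

Gradient-based saliency methods replace the ablation with a single backward pass, which is cheaper but, as we have analyzed, can be unreliable as an explanation.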
Dataset Biases We investigate how models can exploit spurious dataset patterns to achieve high accuracy without true understanding. We have analyzed techniques for discovering such patterns and identified issues in existing annotation schemes. Our current research aims to create better model evaluations.
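One simple way such spurious patterns are surfaced is by measuring how strongly each token's occurrences skew toward a single label. The sketch below does this on a tiny made-up NLI-style dataset, where the negation word "not" is a perfect (and spurious) cue for the contradiction label; both the dataset and the `skew` statistic are illustrative assumptions, not our published method.

```python
# Toy artifact detection: for each token, compute the fraction of its
# occurrences that carry its majority label. A skew near 1.0 flags a
# potential spurious cue. Dataset is invented for illustration.

from collections import Counter, defaultdict

dataset = [
    ("the man is not sleeping", "contradiction"),
    ("nobody is outside", "contradiction"),
    ("a dog is not barking", "contradiction"),
    ("a woman reads a book", "entailment"),
    ("the dog runs in a park", "entailment"),
    ("a man plays guitar", "entailment"),
]

counts = defaultdict(Counter)  # token -> Counter over labels
for hypothesis, label in dataset:
    for token in set(hypothesis.split()):  # count each token once per example
        counts[token][label] += 1

def skew(token):
    """Fraction of the token's occurrences that carry its majority label."""
    c = counts[token]
    return max(c.values()) / sum(c.values())

print(skew("not"), skew("the"))  # "not" is perfectly label-predictive: 1.0 vs 0.5
```

A model trained on such data can reach high accuracy by keying on "not" alone, which is exactly the kind of shortcut that better evaluations need to rule out.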