Eric Wallace

ericwallace@berkeley.edu | Twitter | Scholar | GitHub | CV



Hi! I am a second-year PhD student at Berkeley advised by Dan Klein and Dawn Song. I work on Machine Learning and Natural Language Processing as part of Berkeley AI Research (BAIR), with affiliations in Berkeley NLP, Berkeley Security, and the RISE Lab.

In 2021, I am interning at Facebook AI Research (FAIR) with Robin Jia and Douwe Kiela. I did my undergrad at the University of Maryland, where I worked with Jordan Boyd-Graber. In 2019, I interned at AI2 with Matt Gardner and Sameer Singh.

If you are an undergrad, I am happy to give advice on getting started in research, applying to PhD programs, and so on; please feel free to email me! I am also excited to talk to other researchers and graduate students who share similar research interests.

Current Research

Security & Privacy: We study vulnerabilities of NLP systems from various adversarial perspectives, including stealing model weights, extracting private training data, poisoning training sets, and manipulating test predictions (adversarial examples). Our current research develops defenses against these vulnerabilities.

Robustness & Generalization: We quantify and analyze the robustness of models to test-time distribution shift. We have shown that models are brittle to natural, expert-designed, and adversarially-inspired distribution shifts. We attribute many of these failures to issues in the training data, e.g., spurious correlations in classification and question answering datasets.

Interpretability: We have analyzed the limitations of interpretation methods and helped to facilitate interpretability research with an open-source toolkit and an EMNLP tutorial. Our current research probes pretrained models using prompts and classifiers, and studies how training examples affect test predictions.


Publications

A recent presentation of my work, given live at EMNLP 2019.