Eric Wallace

ericwallace@berkeley.edu | Twitter | Scholar | GitHub | CV



Hi! I am a second-year PhD student at UC Berkeley advised by Dan Klein and Dawn Song. I work on Machine Learning and Natural Language Processing as part of Berkeley AI Research (BAIR), with affiliations in Berkeley NLP, Berkeley Security, and the RISE Lab.

For summer 2021, I am interning at Facebook AI Research (FAIR) with Robin Jia and Douwe Kiela. I did my undergrad at the University of Maryland, where I worked with Jordan Boyd-Graber. In 2019, I interned at AI2 with Matt Gardner and Sameer Singh.

Current Research

Security & Privacy: We study vulnerabilities of NLP systems from various adversarial perspectives, including stealing model weights, extracting private training data, poisoning training sets, and manipulating test predictions (adversarial examples). Our current research develops defenses against these vulnerabilities.

Robustness & Generalization: We quantify and analyze the robustness of models under test-time distribution shift. We have shown that models are brittle to natural, expert-designed, and adversarially-inspired distribution shifts. We attribute many of these failures to issues in the training data, e.g., spurious correlations in classification and question answering datasets.

Interpretability: We have analyzed the limitations of interpretation methods and helped facilitate interpretability research with an open-source toolkit and an EMNLP tutorial. Our current research probes pretrained models using prompts and classifiers, and studies how training examples affect test predictions.

Few-shot Learning: We use language models such as GPT-3 for few-shot learning by "prompting" them with training examples. We've shown that few-shot learning can be highly sensitive to the choice of the prompt, and we've mitigated this sensitivity and improved accuracy by automatically designing the prompt template and calibrating predictions.
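
To make the prompting setup concrete, here is a minimal Python sketch of few-shot classification with a simple contextual-calibration step, where the model's bias is estimated on a content-free input (e.g., "N/A") and divided out before picking a label. The get_label_probs helper is a hypothetical stand-in for querying a language model, and the example texts and label names are purely illustrative.

```python
import numpy as np

# A handful of labeled training examples that form the "prompt".
train_examples = [
    ("The movie was fantastic!", "positive"),
    ("I hated every minute of it.", "negative"),
]

def build_prompt(examples, test_input):
    """Format the training examples and the test input as one prompt string."""
    lines = [f"Input: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Input: {test_input}\nSentiment:")
    return "\n\n".join(lines)

def get_label_probs(prompt, labels):
    """Hypothetical stand-in: query a language model and return the
    probability it assigns to each candidate label token."""
    raise NotImplementedError("replace with a real model/API call")

def calibrated_predict(test_input, labels=("positive", "negative")):
    # Probabilities on the real test input.
    p = np.array(get_label_probs(build_prompt(train_examples, test_input), labels))
    # Probabilities on a content-free input, estimating the model's prior bias
    # toward each label given this particular prompt.
    p_cf = np.array(get_label_probs(build_prompt(train_examples, "N/A"), labels))
    # Divide out the bias and renormalize, then take the argmax label.
    calibrated = p / p_cf
    calibrated /= calibrated.sum()
    return labels[int(np.argmax(calibrated))]
```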


Publications

A recent presentation of my work, given live at EMNLP 2019.