Eric Wallace | Twitter | Scholar | GitHub | CV

Hi! I am a third-year PhD student at UC Berkeley working on Machine Learning and Natural Language Processing. I am advised by Dan Klein and Dawn Song, and I have affiliations with BAIR, Berkeley NLP, and Berkeley Security. My research is generously supported by the Apple Scholars in AI/ML Fellowship.

In the past, I interned at FAIR in 2021 with Robin Jia and Douwe Kiela, and also at AI2 in 2019 with Matt Gardner and Sameer Singh. I did my undergrad at the University of Maryland, where I worked with Jordan Boyd-Graber.

Current Research Interests

Security & Privacy: I study vulnerabilities of NLP systems from various adversarial perspectives, including stealing model weights, extracting private training data, poisoning training sets, and manipulating test predictions. My current research develops defenses against these vulnerabilities.

Large Language Models: I use large language models for few-shot learning by prompting them with training examples. My work has shown that few-shot learning can be highly sensitive to the choice of prompt, and has mitigated this sensitivity and improved accuracy via automatic prompt design and calibration. My current research focuses on making few-shot finetuning simple and efficient.
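The calibration idea can be sketched as follows. This is a minimal illustration, not the exact method from my papers: it assumes the model exposes normalized label probabilities, and the `calibrate` function and toy probability values are hypothetical. The intuition is to measure the model's bias on a content-free input (e.g. "N/A") and divide it out:

```python
import numpy as np

def calibrate(p_label, p_content_free):
    """Rescale label probabilities by the model's bias on a
    content-free input, then renormalize to sum to 1."""
    # Weight each label by 1 / P(label | content-free input): labels the
    # model favors regardless of the input are down-weighted.
    w = 1.0 / np.asarray(p_content_free, dtype=float)
    scores = w * np.asarray(p_label, dtype=float)
    return scores / scores.sum()

# Toy example: under this few-shot prompt, the model assigns the first
# label 0.7 even when the input is content-free, i.e. it is biased.
p_cf = [0.7, 0.3]   # P(label | "N/A") under the prompt
p_raw = [0.6, 0.4]  # P(label | actual test input)
print(calibrate(p_raw, p_cf))  # first label is down-weighted after calibration
```

After calibration the second label wins in this toy case, since the raw preference for the first label was weaker than the model's built-in bias toward it.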

Robustness & Generalization: I analyze the robustness of models to test-time distribution shift. My work has shown that NLP models are brittle to natural, expert-designed, and adversarial shifts. I attribute many of these failures to issues in the training data, e.g., spurious correlations in classification and question answering datasets. My recent work develops new methods for training data collection.