Eric Wallace

Email | Twitter | Scholar | GitHub | CV

Hello! I am a fourth-year PhD student at UC Berkeley working on machine learning and natural language processing. I am advised by Dan Klein and Dawn Song, and I have affiliations with BAIR, Berkeley NLP, and Berkeley Security. My research is supported by the Apple Scholars in AI Fellowship. In the past, I've interned at FAIR and AI2, and I did my undergrad at the University of Maryland.

This semester I'm a co-instructor for Berkeley's CS 288 NLP course. If you are interested in getting involved in research, taking this class is a great place to start. If you are already enrolled in the course, please use Edstem for all inquiries. If you are having difficulty enrolling, feel free to contact me via email.

Research

I focus on large language models, security/privacy/trustworthiness in ML, and the intersection of these topics. Some of the directions that my collaborators and I have worked on include:


  → Memorization & Privacy: We've shown that language models have a tendency to memorize and regurgitate their training data [1,2,3,4], raising concerns about user privacy, copyright agreements, GDPR statutes, and more.


  → Prompting: We've done some of the early work on "prompting" language models to solve tasks, including methods for prompt design [4,5], parameter efficiency [6], and understanding prompting failure modes [7].


  → Robustness: We've demonstrated that NLP systems lack robustness to natural [8] and adversarial distribution shifts [9,10,11], and we have attributed these failures to quality and diversity issues in the training data [12,13,14,15].


  → New Threat Models: We've explored and refined new types of adversarial vulnerabilities for NLP systems, including ways to steal model weights [16] and poison training sets [17].


Publications