Eric Wallace

ericwallace@berkeley.edu | Twitter | Scholar | GitHub | CV

Hi! I am a second-year PhD student at UC Berkeley advised by Dan Klein and Dawn Song. I work on Machine Learning and Natural Language Processing as part of Berkeley NLP, the RISE Lab, and Berkeley AI Research (BAIR).

Before this, I did my undergrad at the University of Maryland, where I worked with Jordan Boyd-Graber. I spent most of 2019 working at the Allen Institute for AI with Matt Gardner and Sameer Singh.

Research Focus

Robustness: We study how an adversary or a distribution shift can affect model behavior. We have surfaced revealing model errors using attacks such as appending universal triggers, reducing inputs, and manually editing inputs. Our current research aims to defend real-world systems and studies new attack vectors such as model stealing and data poisoning.

Interpretability: We aim to open up the black box of machine learning by interpreting model predictions. We have analyzed the limitations of interpretation methods and helped facilitate interpretability research with an open-source toolkit and an EMNLP tutorial. Our current research probes pretrained models and studies how training examples affect test predictions.

Dataset Quality: We study how "bad data" can lead to undesirable model behavior. We have analyzed techniques for discovering spurious dataset patterns and identified issues in existing annotation schemes. Our current research creates better evaluation sets and studies worst-case training examples.

Publications

A recent presentation of my work!