Eric Wallace

I am currently at AI2 working with Matt Gardner and Sameer Singh (UC Irvine). In Fall 2019, I will begin a Ph.D. at UC Berkeley with Dan Klein and Dawn Song.

My current research focuses on interpreting, attacking, and understanding machine learning models, especially for NLP. For example, I design new interpretation methods [1,2], study when and why neural models fail [3], and craft adversarial attacks on NLP systems [4].

I previously attended the University of Maryland, where I worked with Jordan Boyd-Graber. I interned at AI2 in 2019 and Lyft Self Driving in 2018.

  • April 2019: Our paper Trick Me If You Can was accepted to TACL 2019.
  • Feb. 2019: New preprint investigating a second-order interpretation method for neural networks.
  • Jan. 2019: Graduated from UMD and moved to the Allen Institute for AI (AI2).
  • Nov. 2018: Presented two works at EMNLP 2018 in Brussels, Belgium. Video here.
  • Sep. 2018: We're hosting a competition to develop question answering systems that can combat adversarial users.
  • Aug. 2018: Paper on a new method to interpret neural NLP models accepted at the EMNLP Interpretability Workshop.
  • Aug. 2018: Paper on the difficulties of interpreting neural models accepted at EMNLP.
  • May 2018: Joining Lyft Self Driving as an intern this summer in Palo Alto, CA.
  • May 2018: Paper accepted at the ACL Student Research Workshop.

active research

Robustness in Deep Natural Language Processing
Models deployed "in the wild" face data distributions unlike those seen during training. How can we build language systems that remain robust against adversaries and noisy users?

Interpretable Language Systems
A central issue when applying neural networks to sensitive domains is test-time interpretability: how can humans understand the reasoning behind neural network predictions?

Fantastic Collaborators: Shi Feng, Sewon Min, Yizhong Wang, Matt Gardner, Sameer Singh, Jordan Boyd-Graber, and many others

Mohit makes great websites