Eric Wallace

ewallac2@umd.edu // CV // Scholar // Blog // GitHub // Twitter



I am between my undergraduate degree and a PhD, currently working at the Allen Institute for AI (AI2) with Matt Gardner and Sameer Singh (UC Irvine). I recently graduated from the University of Maryland, where I worked with Jordan Boyd-Graber.

My broad research interest is learning representations from raw data to solve Machine Learning problems. Recently, I have focused on building robust, customizable, and interpretable deep learning systems for NLP. I am excited about techniques in adversarial learning, meta-learning, and reinforcement learning.

Previously, I conducted research in GPU computing and computational aerodynamics at the Alfred Gessow Rotorcraft Center under Ananth Sridharan and Inderjit Chopra. I interned at Lyft Self Driving in Summer 2018 and at Intel in Fall 2017.

news

  • Feb. 2019: New preprint investigating a second-order interpretation method for neural networks.
  • Jan. 2019: Graduated from UMD and moved to the Allen Institute for AI (AI2).
  • Dec. 2018: Alvin will present our paper at the Black in AI Workshop at NeurIPS 2018.
  • Nov. 2018: Presented two works at EMNLP 2018 in Brussels, Belgium. Video here.
  • Sep. 2018: We're hosting a competition to develop Question Answering systems that can combat adversarial users.
  • Aug. 2018: Paper on a new method to interpret neural NLP models accepted at the EMNLP Interpretability Workshop.
  • Aug. 2018: Paper on the difficulties of interpreting neural models accepted at EMNLP.
  • May 2018: Joining Lyft Self Driving as an intern this summer in Palo Alto, CA.
  • May 2018: Paper accepted at the ACL Student Research Workshop.
  • Jan. 2018: Presented work on GPU parallelization at 2018 AIAA SciTech in Orlando.
  • Nov. 2017: Talk at the Google DeepMind Starcraft 2 AI Workshop in Anaheim, California.
  • May 2017: Talk for the Aerospace Board of Advisors at the University of Maryland.
  • Apr. 2017: Won Best Paper at the AIAA Student Conference hosted by the University of Virginia.


active research

Robustness in Deep Natural Language Processing
Models deployed "in the wild" face data distributions unlike those seen during training. How can we develop language systems that are robust against adversaries and noisy users?

Interpretable Language Systems
A central issue when applying neural networks to sensitive domains is test-time interpretability: how can humans understand the reasoning behind neural network predictions?


Fantastic Collaborators: Shi Feng, Mohit Iyyer, Soheil Feizi, Jordan Boyd-Graber, Hal Daumé III, and others

Mohit makes great websites