Eric Wallace

ewallac2@umd.edu // CV // Scholar // Blog // GitHub



I am an undergraduate student at the University of Maryland working at the intersection of Deep Learning and NLP. I am advised by Jordan Boyd-Graber and am a member of the Computational Linguistics and Information Processing (CLIP) Lab. In January 2019, I will move to the Allen Institute for Artificial Intelligence (AI2).

My broad research interest is learning representations from raw data (text, video, audio) to solve Machine Learning problems. Recently, I have focused on building robust, customizable, and interpretable systems for language. I am excited about techniques in adversarial learning, meta-learning, and reinforcement learning.

Previously, I conducted research in GPU Computing and Computational Aerodynamics at the Alfred Gessow Rotorcraft Center under Ananth Sridharan and Inderjit Chopra. I interned at Lyft Self Driving in Summer 2018 and at Intel in Fall 2017.

news

  • Dec. 2018: Alvin will present our paper at the Black in AI Workshop at NeurIPS 2018.
  • Nov. 2018: Presented two papers at EMNLP 2018 in Brussels, Belgium.
  • Sep. 2018: We're hosting a competition to develop Question Answering systems that can combat adversarial users.
  • Aug. 2018: Paper on a new method to interpret neural NLP models accepted at the EMNLP Interpretability Workshop.
  • Aug. 2018: Paper on the difficulties of interpreting neural models accepted at EMNLP.
  • May 2018: Joining Lyft Self Driving as an intern this Summer in Palo Alto, CA.
  • May 2018: Paper accepted at the ACL Student Research Workshop.
  • Jan. 2018: Presented work on GPU parallelization at AIAA SciTech 2018 in Orlando.
  • Nov. 2017: Talk at the Google DeepMind StarCraft 2 AI Workshop in Anaheim, California.
  • May 2017: Talk for the Aerospace Board of Advisors at the University of Maryland.
  • Apr. 2017: Won Best Paper at the AIAA Student Conference hosted by the University of Virginia.


active research

Robustness in Deep Natural Language Processing
Models deployed "in the wild" face data distributions unlike those seen during training. How can we develop language systems that are robust against adversaries and noisy users?

Interpretable Language Systems
A central issue when applying neural networks to sensitive domains is test-time interpretability: how can humans understand the reasoning behind neural network predictions?


Fantastic Collaborators: Shi Feng, Mohit Iyyer, Pedro Rodriguez, Soheil Feizi, Jordan Boyd-Graber, Hal Daumé III, and others

Mohit makes great websites