Imagination-Augmented Natural Language Understanding

Abstract

Human brains integrate linguistic and perceptual information simultaneously to understand natural language, and they hold the critical ability to render imaginations. Such abilities enable us to construct new abstract concepts or concrete objects, and they are essential for bringing applicable knowledge to bear on problems in low-resource scenarios. However, most existing methods for Natural Language Understanding (NLU) focus mainly on textual signals. They do not simulate the human visual imagination ability, which hinders models from inferring and learning efficiently from limited data samples. Therefore, we introduce an Imagination-Augmented Cross-modal Encoder (iACE) to solve natural language understanding tasks from a novel learning perspective: imagination-augmented cross-modal understanding. iACE enables visual imagination with external knowledge transferred from a powerful generative model and a pre-trained vision-and-language model. Extensive experiments on the GLUE and SWAG datasets show that iACE achieves consistent improvements over visually-supervised pre-trained models. More importantly, results in extreme and normal few-shot settings validate the effectiveness of iACE in low-resource natural language understanding circumstances.
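The abstract describes the approach only at a high level. The sketch below is not the authors' iACE implementation; it is a minimal illustration of the general imagination-augmented idea under stated assumptions: a placeholder imagine() function stands in for the text-to-image generative model, and CLIP is assumed as the pre-trained vision-and-language encoder. The fusion (simple concatenation plus a linear head) is likewise an illustrative choice, not the paper's architecture.

import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor


def imagine(text: str) -> Image.Image:
    """Placeholder for the generative model that renders a 'visual
    imagination' of the input sentence. A real system would call a
    text-to-image generator here; we return a blank stand-in image."""
    return Image.new("RGB", (224, 224))


class ImaginationAugmentedClassifier(nn.Module):
    """Toy imagination-augmented NLU classifier: encode the sentence and
    its imagined image with a pre-trained vision-and-language model,
    fuse the two embeddings, and classify."""

    def __init__(self, num_labels: int = 2):
        super().__init__()
        # Pre-trained vision-and-language model supplying both encoders.
        self.clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        dim = self.clip.config.projection_dim  # 512 for this checkpoint
        # Classifier head over the fused (text + imagined image) features.
        self.head = nn.Linear(2 * dim, num_labels)

    def forward(self, pixel_values, input_ids, attention_mask):
        text_emb = self.clip.get_text_features(
            input_ids=input_ids, attention_mask=attention_mask)
        image_emb = self.clip.get_image_features(pixel_values=pixel_values)
        fused = torch.cat([text_emb, image_emb], dim=-1)
        return self.head(fused)


processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
sentence = "A dog catches a frisbee in the park."
inputs = processor(text=[sentence], images=imagine(sentence),
                   return_tensors="pt", padding=True)
model = ImaginationAugmentedClassifier(num_labels=2)
logits = model(**inputs)  # shape: (1, num_labels)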

Authors
Yujie Lu, Wanrong Zhu, Xin Eric Wang, Miguel Eckstein, William Yang Wang
Date
2022
Type
Peer-Reviewed Conference Presentation
Journal
ACL Anthology
Volume
Proceedings of the 2022 Annual Conference of the North American Chapter of the ACL (NAACL)
Number
2022.naacl-main.326
Pages
4392–4402
City
Seattle
State
Washington
Country
United States