ReMemBERT: Recurrent Memory-Augmented BERT for Conversational Text Classification

Recommended citation: Danovitch, J., & SalahEldeen, H. (2019). ReMemBERT: Recurrent Memory-Augmented BERT for Conversational Text Classification. Technical report in preparation.

Supervised by: Hany SalahEldeen

Work primarily completed during internship at Microsoft.

Intelligent assistants (IAs) such as Google Assistant, Alexa, Siri, and Cortana provide users with a natural language interface for completing many common tasks, such as setting an alarm, sending an email, or playing music, using only a brief utterance (e.g., "Alexa, play Despacito."). To move from isolated utterances to a more natural, conversational setting, IAs must be able to understand each turn of dialog within the greater conversational context.

To this end, we propose a memory-based adaptation of BERT for tasks that involve classifying utterances within a conversation, such as intent, topic, dialog act, or emotion classification, which we unify as the task of 'conversational text classification'. We compare our method against several benchmarks and show that using a conversation-aware model improves performance.
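To give a concrete picture of the general idea, the sketch below shows one way a recurrent memory could be combined with per-utterance encodings for turn-level classification. It is a minimal NumPy illustration, not the paper's actual architecture: the fixed utterance vectors stand in for BERT [CLS] embeddings, and the GRU-style memory cell, its parameters, and the `classify_conversation` helper are all hypothetical names chosen for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, C = 8, 6, 4  # utterance dim, memory dim, number of classes (toy sizes)

# Hypothetical parameters: a GRU-style memory cell plus a linear classifier.
Wz, Uz = rng.standard_normal((H, D)), rng.standard_normal((H, H))
Wh, Uh = rng.standard_normal((H, D)), rng.standard_normal((H, H))
Wc = rng.standard_normal((C, D + H))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify_conversation(utterance_vecs):
    """Classify each turn using a recurrent memory of the prior turns.

    utterance_vecs: list of D-dim vectors (stand-ins for BERT [CLS] outputs).
    Returns one predicted class index per turn.
    """
    m = np.zeros(H)  # conversational memory starts empty
    preds = []
    for u in utterance_vecs:
        # Classify the current turn from its own encoding plus the memory.
        logits = Wc @ np.concatenate([u, m])
        preds.append(int(np.argmax(logits)))
        # GRU-style update: gate how much of the memory to overwrite.
        z = sigmoid(Wz @ u + Uz @ m)
        h = np.tanh(Wh @ u + Uh @ m)
        m = (1 - z) * m + z * h
    return preds

turns = [rng.standard_normal(D) for _ in range(3)]
print(classify_conversation(turns))
```

The key design point the sketch tries to convey is that each turn's prediction conditions on a compact, recurrently updated summary of the conversation so far, rather than on the utterance in isolation.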