Research Repository

Domain transfer for deep natural language generation from abstract meaning representations

Authors

Dethlefs, Nina
Abstract

Stochastic natural language generation systems that are trained from labelled datasets are often domain-specific in their annotation and in their mapping from semantic input representations to lexical-syntactic outputs. As a result, learnt models fail to generalize across domains, heavily restricting their usability beyond single applications. In this article, we focus on the problem of domain adaptation for natural language generation. We show how linguistic knowledge from a source domain, for which labelled data is available, can be adapted to a target domain by reusing training data across domains. As a key to this, we propose to employ abstract meaning representations as a common semantic representation across domains. We model natural language generation as a long short-term memory recurrent neural network encoder-decoder, in which one recurrent neural network learns a latent representation of a semantic input, and a second recurrent neural network learns to decode it into a sequence of words. We show that the learnt representations can be transferred across domains and can be leveraged effectively to improve training on new, unseen domains. Experiments in three different domains and with six datasets demonstrate that the lexical-syntactic constructions learnt in one domain can be transferred to new domains, achieving 75-100% of the performance of in-domain training, based on objective metrics such as BLEU and semantic error rate as well as a subjective human rating study. Training a policy with prior knowledge from a different domain is consistently better than pure in-domain training by up to 10%.
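The encoder-decoder architecture described in the abstract can be sketched as follows. This is an illustrative toy implementation in NumPy with random, untrained weights and hypothetical dimensions, not the authors' actual model: one LSTM reads a sequence of semantic input vectors into a latent state, and a second LSTM, seeded with that state, greedily emits word indices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: all four gates computed from [h; x] with one stacked weight matrix."""
    def __init__(self, input_dim, hidden_dim, rng):
        self.hidden_dim = hidden_dim
        self.W = rng.standard_normal((4 * hidden_dim, hidden_dim + input_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([h, x]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c_new = f * c + i * g          # update the cell memory
        h_new = o * np.tanh(c_new)     # expose the gated hidden state
        return h_new, c_new

class EncoderDecoder:
    """Encoder LSTM reads the semantic input sequence; decoder LSTM,
    initialised with the encoder's final state, emits word indices greedily."""
    def __init__(self, input_dim, hidden_dim, vocab_size, seed=0):
        rng = np.random.default_rng(seed)
        self.encoder = LSTMCell(input_dim, hidden_dim, rng)
        self.decoder = LSTMCell(vocab_size, hidden_dim, rng)
        self.W_out = rng.standard_normal((vocab_size, hidden_dim)) * 0.1
        self.vocab_size = vocab_size
        self.hidden_dim = hidden_dim

    def encode(self, inputs):
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        for x in inputs:
            h, c = self.encoder.step(x, h, c)
        return h, c  # latent representation of the semantic input

    def decode(self, h, c, max_len=5):
        tokens = []
        x = np.zeros(self.vocab_size)  # one-hot placeholder for a <BOS> token
        for _ in range(max_len):
            h, c = self.decoder.step(x, h, c)
            tok = int(np.argmax(self.W_out @ h))  # greedy choice over the vocabulary
            tokens.append(tok)
            x = np.zeros(self.vocab_size)
            x[tok] = 1.0               # feed the chosen word back in
        return tokens

model = EncoderDecoder(input_dim=6, hidden_dim=8, vocab_size=10)
amr_features = [np.ones(6), np.zeros(6)]  # toy stand-in for vectorised AMR input
h, c = model.encode(amr_features)
words = model.decode(h, c)
print(words)  # a sequence of word indices from the toy vocabulary
```

In the domain-transfer setting described above, the weights learnt in the source domain (here randomly initialised) would be reused to warm-start training in the target domain, with the shared abstract meaning representation keeping the encoder's input space comparable across domains.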

Citation

Dethlefs, N. (2017). Domain transfer for deep natural language generation from abstract meaning representations. IEEE Computational Intelligence Magazine, 12(3), 18-28. https://doi.org/10.1109/mci.2017.2708558

Journal Article Type Article
Acceptance Date May 15, 2017
Online Publication Date Jul 18, 2017
Publication Date 2017-08
Deposit Date Jul 25, 2017
Publicly Available Date Mar 29, 2024
Journal IEEE Computational Intelligence Magazine
Print ISSN 1556-603X
Publisher Institute of Electrical and Electronics Engineers
Peer Reviewed Peer Reviewed
Volume 12
Issue 3
Pages 18-28
DOI https://doi.org/10.1109/mci.2017.2708558
Keywords Semantics; Natural languages; Training; Stochastic processes; Pragmatics; Adaptation models; Machine learning; Natural language processing
Public URL https://hull-repository.worktribe.com/output/453764
Publisher URL http://ieeexplore.ieee.org/document/7983466/
Additional Information This is the author's accepted version of the article: N. Dethlefs, "Domain Transfer for Deep Natural Language Generation from Abstract Meaning Representations," in IEEE Computational Intelligence Magazine, vol. 12, no. 3, pp. 18-28, Aug. 2017.

Files

Article.pdf (414 KB)

Copyright Statement
© 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
