Revisiting Adversarial Autoencoder for Unsupervised Word Translation with Cycle Consistency and Improved Training

Abstract

Adversarial training has shown impressive success in learning bilingual dictionaries without any parallel data by mapping monolingual embeddings to a shared space. However, recent work has shown superior performance for non-adversarial methods on more challenging language pairs. In this work, we revisit the adversarial autoencoder for unsupervised word translation and propose two novel extensions that yield more stable training and improved results. Our method includes regularization terms to enforce cycle consistency and input reconstruction, and pits the target encoder as an adversary against the corresponding discriminator. Extensive experiments with European and non-European languages show that our method achieves better performance than recently proposed adversarial and non-adversarial approaches and is also competitive with supervised systems.
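The cycle-consistency and input-reconstruction regularizers mentioned in the abstract can be sketched as below. This is a minimal illustrative example, not the paper's implementation: the linear maps `G` and `F`, the function names, and the use of simple mean-squared-error losses are all assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                        # embedding dimension (illustrative)
X = rng.normal(size=(5, d))  # batch of source-language word embeddings

# Hypothetical forward map G (source -> shared space) and backward
# map F (shared space -> source), modeled here as linear maps.
G = np.eye(d) + 0.1 * rng.normal(size=(d, d))
F = np.linalg.inv(G)         # exact inverse, so the cycle is lossless

def cycle_loss(X, G, F):
    """Mean squared error between X and its round-trip F(G(X))."""
    return float(np.mean((X - X @ G @ F) ** 2))

def reconstruction_loss(X, enc, dec):
    """Autoencoder input-reconstruction error ||X - dec(enc(X))||^2."""
    return float(np.mean((X - X @ enc @ dec) ** 2))

# With F the exact inverse of G, both regularizers are ~0; during
# training they would instead penalize imperfect round-trips.
print(cycle_loss(X, G, F))
print(reconstruction_loss(X, G, F))
```

In practice such terms would be added to the adversarial objective and minimized jointly, discouraging mappings whose round-trip distorts the original embeddings.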

Publication
In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)