CON-S2V: A Generic Framework for Incorporating Extra-Sentential Context into Sen2Vec

Abstract

We present a novel approach to learning distributed representations of sentences from unlabeled data by modeling both the content and the context of a sentence. The content model learns the representation of a sentence by predicting its constituent words. The context model, in turn, comprises a neighbor-prediction component and a regularizer, which model the distributional and proximity hypotheses, respectively. We propose an online algorithm to train the model components jointly. We evaluate our models in a setup where contextual information is available. Experimental results on tasks involving classification, clustering, and ranking of sentences show that our model outperforms the best existing models by a wide margin across multiple datasets.
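The joint objective described above can be sketched as a toy implementation. This is a minimal illustration, not the paper's actual formulation: the sigmoid neighbor-prediction score, the squared-distance proximity regularizer, the weighting `lam`, and all variable names are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab, n_sents = 8, 20, 5
S = rng.normal(scale=0.1, size=(n_sents, dim))  # sentence vectors (learned)
W = rng.normal(scale=0.1, size=(vocab, dim))    # output word vectors
# each sentence is a small bag of word ids (toy data)
sent_words = [rng.integers(0, vocab, size=4) for _ in range(n_sents)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_loss(S, W, lam=0.1):
    """Content loss + neighbor prediction + proximity regularizer."""
    loss = 0.0
    for i in range(n_sents):
        # content component: predict the sentence's own words
        p = softmax(W @ S[i])
        loss += -np.log(p[sent_words[i]]).sum()
        # context component: adjacent sentences are the neighbors
        for j in (i - 1, i + 1):
            if 0 <= j < n_sents:
                loss += -np.log(sigmoid(S[i] @ S[j]))    # neighbor prediction
                loss += lam * np.sum((S[i] - S[j]) ** 2)  # proximity regularizer
    return loss

def num_grad(f, X, eps=1e-5):
    """Central-difference gradient, in place of analytic SGD updates."""
    G = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        X[idx] += eps; hi = f()
        X[idx] -= 2 * eps; lo = f()
        X[idx] += eps
        G[idx] = (hi - lo) / (2 * eps)
    return G

before = joint_loss(S, W)
for _ in range(20):  # a few joint updates on the sentence vectors
    S -= 0.05 * num_grad(lambda: joint_loss(S, W), S)
after = joint_loss(S, W)
```

The sketch uses numerical gradients only to keep it short; the paper trains the components jointly with an online algorithm, and a real implementation would use analytic gradients with negative sampling.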

Publication
In Proceedings of the European Conference on Machine Learning (ECML-PKDD 2017), Skopje, Macedonia