
AAAI 2018

Using k-Way Co-Occurrences for Learning Word Embeddings

Conference Paper Main Track: NLP and Machine Learning Artificial Intelligence

Abstract

Co-occurrences between two words provide useful insights into the semantics of those words. Consequently, much prior work on word embedding learning has used co-occurrences between two words as the training signal. However, in natural language texts it is common for multiple words to be related and co-occurring in the same context. We extend the notion of co-occurrences to cover k(≥2)-way co-occurrences among a set of k words. Specifically, we prove a theoretical relationship between the joint probability of k(≥2) words and the sum of ℓ2 norms of their embeddings. Next, we propose a learning objective, motivated by our theoretical result, that utilises k-way co-occurrences for learning word embeddings. Our experimental results show that the derived theoretical relationship does indeed hold empirically and that, despite data sparsity, for smaller values of k(≤5), k-way embeddings perform comparably to or better than 2-way embeddings on a range of tasks.
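To make the notion of a k-way co-occurrence concrete, the following is a minimal sketch of counting sets of k words that appear together in a sliding context window. The window size, the sliding-window counting scheme, and the function name `kway_cooccurrences` are illustrative assumptions, not the paper's exact extraction procedure.

```python
from collections import Counter
from itertools import combinations

def kway_cooccurrences(tokens, k=3, window=5):
    """Count k-way co-occurrences: unordered sets of k distinct words
    that all appear inside the same sliding context window.

    Note: overlapping windows can count the same word set more than
    once; this is a simplifying choice for illustration.
    """
    counts = Counter()
    for i in range(len(tokens) - window + 1):
        # Deduplicate and sort the window so each word set has a
        # canonical key regardless of word order.
        ctx = sorted(set(tokens[i:i + window]))
        for combo in combinations(ctx, k):
            counts[combo] += 1
    return counts

# Example: 3-way co-occurrences in a toy corpus.
corpus = "the cat sat on the mat while the dog sat on the rug".split()
top_triples = kway_cooccurrences(corpus, k=3, window=4).most_common(3)
```

With k=2 this reduces to ordinary pairwise co-occurrence counting, which is the 2-way special case the abstract contrasts against; larger k values produce sparser counts, consistent with the data-sparsity caveat above.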

Authors

No authors are indexed for this paper.

Keywords

No keywords are indexed for this paper.

Context

Venue: AAAI Conference on Artificial Intelligence
Archive span: 1980-2026
Indexed papers: 28718
Paper id: 540765203895829046