Publications

Conference (International): Incremental Skip-gram Model with Negative Sampling

Nobuhiro Kaji, Hayato Kobayashi

Empirical Methods in Natural Language Processing (EMNLP 2017)

2017.9.8

This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives. Existing methods for training neural word embeddings, including SGNS, are multi-pass algorithms and thus cannot perform incremental model updates. To address this problem, we present a simple incremental extension of SGNS and provide a thorough theoretical analysis to demonstrate its validity. Empirical experiments confirmed the correctness of the theoretical analysis as well as the practical usefulness of the incremental algorithm.
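For readers who want a concrete picture of what incremental SGNS involves, the sketch below shows one way such a streaming update could be organized in Python with NumPy. This is not the paper's reference implementation: the class name, hyperparameters, and the naive renormalization of the noise distribution are all illustrative assumptions. The paper's contribution concerns updating the model as data arrives in a single pass; this sketch only approximates that setting.

import numpy as np
from collections import defaultdict

class IncrementalSGNS:
    """Illustrative sketch of streaming skip-gram with negative sampling."""

    def __init__(self, dim=100, window=5, neg=5, lr=0.025, power=0.75):
        self.dim, self.window, self.neg = dim, window, neg
        self.lr, self.power = lr, power
        self.counts = defaultdict(int)   # running unigram counts
        self.vocab = {}                  # word -> row index
        self.W_in, self.W_out = [], []   # target / context vectors
        self.rng = np.random.default_rng(0)

    def _index(self, w):
        # Grow the vocabulary and the embedding matrices on the fly.
        if w not in self.vocab:
            self.vocab[w] = len(self.vocab)
            self.W_in.append((self.rng.random(self.dim) - 0.5) / self.dim)
            self.W_out.append(np.zeros(self.dim))
        return self.vocab[w]

    def _negatives(self):
        # Noise distribution: unigram counts raised to `power` (0.75),
        # renormalized on every call. A practical implementation would
        # maintain this table incrementally rather than rebuilding it.
        p = np.fromiter((self.counts[w] for w in self.vocab), dtype=float)
        p **= self.power
        return self.rng.choice(len(p), size=self.neg, p=p / p.sum())

    def train_sentence(self, sentence):
        # Single pass over the incoming sentence: update counts first,
        # then take one SGD step per (target, context) pair.
        for w in sentence:
            self.counts[w] += 1
            self._index(w)
        for i, w in enumerate(sentence):
            t = self.vocab[w]
            lo = max(0, i - self.window)
            hi = min(len(sentence), i + self.window + 1)
            for j in range(lo, hi):
                if j == i:
                    continue
                c = self.vocab[sentence[j]]
                self._step(t, c, 1.0)             # observed pair
                for n in self._negatives():       # sampled noise pairs
                    if n != c:
                        self._step(t, int(n), 0.0)

    def _step(self, t, c, label):
        # One logistic-regression SGD step on a (target, context) pair.
        v, u = self.W_in[t], self.W_out[c]
        g = self.lr * (label - 1.0 / (1.0 + np.exp(-(v @ u))))
        self.W_in[t], self.W_out[c] = v + g * u, u + g * v

model = IncrementalSGNS(dim=50)
model.train_sentence("the quick brown fox jumps over the lazy dog".split())

The point the sketch illustrates is that, unlike batch word2vec, the vocabulary, the unigram counts behind the noise distribution, and the embedding matrices all grow and change as data arrives, which is the regime the paper analyzes.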

Paper : Incremental Skip-gram Model with Negative Sampling (external site)

PDF : Incremental Skip-gram Model with Negative Sampling