Publications

Conference (International): Better Exploiting Latent Variables in Text Modeling

Canasai Kruengkrai

The 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), poster presentation

2019.7.31

We show that sampling latent variables multiple times at a gradient step helps improve a variational autoencoder, and we propose a simple and effective method to better exploit these latent variables through hidden-state averaging. Consistent performance gains on two different datasets, Penn Treebank and Yahoo, indicate the generalizability of our method.
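The abstract describes the idea only at a high level. As a rough, hypothetical sketch (not the authors' released code), the PyTorch snippet below illustrates drawing several latent samples per gradient step via the reparameterization trick and averaging the decoder hidden states derived from them; all class and parameter names (MultiSampleVAE, num_samples, latent_to_hidden, etc.) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MultiSampleVAE(nn.Module):
    """Minimal sketch (hypothetical names): draw several latent samples per
    gradient step and average the decoder hidden states derived from them."""

    def __init__(self, vocab_size, hidden_dim=256, latent_dim=32, num_samples=4):
        super().__init__()
        self.num_samples = num_samples
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        emb = self.embed(tokens)                         # (B, T, H)
        _, h = self.encoder(emb)                         # h: (1, B, H)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        std = torch.exp(0.5 * logvar)

        # Sample the latent variable several times (reparameterization trick)
        # and map each sample to a candidate initial decoder hidden state.
        hidden_states = []
        for _ in range(self.num_samples):
            z = mu + std * torch.randn_like(std)         # one latent sample
            hidden_states.append(torch.tanh(self.latent_to_hidden(z)))

        # Average the hidden states obtained from the multiple samples.
        h0 = torch.stack(hidden_states, dim=0).mean(dim=0).unsqueeze(0)

        dec_out, _ = self.decoder(emb, h0)               # simplified teacher forcing
        logits = self.out(dec_out)

        # Standard Gaussian KL term of the VAE objective.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return logits, kl.mean()
```

Averaging after mapping each sample to a hidden state keeps the decoder interface unchanged while exposing it to several draws from the approximate posterior at every gradient step.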

Software Download (28KB)

Paper: Better Exploiting Latent Variables in Text Modeling (external site)