Publications

CONFERENCE (INTERNATIONAL)

JGLUE: Japanese General Language Understanding Evaluation

Kentaro Kurihara (Waseda University), Daisuke Kawahara (Waseda University), Tomohide Shibata (Yahoo Japan Corporation)

The 13th Edition of the Language Resources and Evaluation Conference (LREC 2022)

June 22, 2022

To develop high-performance natural language understanding (NLU) models, such models are actively evaluated and analyzed by having them comprehensively solve multiple types of NLU tasks collected in a benchmark. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; however, no such benchmark exists for Japanese. We build a Japanese version of GLUE, JGLUE, from scratch without translation to measure general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.
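As a minimal sketch of how one might load a JGLUE task for evaluation, the snippet below uses the Hugging Face `datasets` library. The `shunk031/JGLUE` dataset path, the `JSTS` config name, and the field names are assumptions based on a community-maintained loader, not an artifact described in the paper itself.

```python
# Sketch: inspecting a JGLUE task via the Hugging Face Hub.
# Assumption: "shunk031/JGLUE" is a community-maintained loader,
# not an official release accompanying the paper.
from datasets import load_dataset

# JSTS is JGLUE's sentence-pair semantic similarity task; other
# configs are assumed to include JNLI, JSQuAD, JCommonsenseQA,
# and MARC-ja.
jsts = load_dataset("shunk031/JGLUE", name="JSTS", trust_remote_code=True)

# Field names below are assumptions about this community loader.
example = jsts["train"][0]
print(example["sentence1"])
print(example["sentence2"])
print(example["label"])  # similarity score for the sentence pair
```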

Paper: JGLUE: Japanese General Language Understanding Evaluation (external link)