Publications

Conference (International) BanditRank: Learning to Rank Using Contextual Bandits

Phanideep Gampa (IIT Varanasi), Sumio Fujita

The 25th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2021)

2021.5.11

We propose an extensible deep learning method that uses reinforcement learning to train neural networks for offline ranking in information retrieval (IR). We call our method BanditRank as it treats ranking as a contextual bandit problem. In the domain of learning to rank for IR, current deep learning models are trained on objective functions different from the measures they are evaluated on. Since most evaluation measures are discrete quantities, they cannot be used by gradient descent algorithms without approximation. BanditRank bridges this gap by directly optimizing a task-specific measure, such as mean average precision (MAP). Specifically, a contextual bandit whose action is to rank input documents is trained using a policy gradient algorithm to directly maximize a reward. The reward can be a single measure, such as MAP, or a combination of several measures. The notion of ranking is also inherent in BanditRank, similar to the current listwise approaches. To evaluate the effectiveness of BanditRank by answering five research questions, we conducted a series of experiments on datasets related to three different tasks, i.e., non-factoid and factoid question answering, and web search. We found that BanditRank performed better than strong baseline methods in the respective tasks.
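The sketch below illustrates the idea described in the abstract, not the authors' implementation: a scoring network defines a stochastic ranking policy, a sampled ranking is scored with a discrete IR measure (here average precision), and the network is updated with a REINFORCE-style policy gradient. The network architecture, the Plackett-Luce sampling of rankings, and all names and hyperparameters are illustrative assumptions.

```python
# Minimal policy-gradient sketch for learning to rank by directly maximizing
# a discrete reward (average precision). All details are assumptions, not
# taken from the BanditRank paper.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Scores each (query, document) feature vector with a single real value."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.mlp(feats).squeeze(-1)  # shape: (n_docs,)

def sample_ranking(scores: torch.Tensor):
    """Sample a permutation (Plackett-Luce style) and return its log-probability."""
    remaining = list(range(scores.shape[0]))
    ranking, log_prob = [], torch.tensor(0.0)
    while remaining:
        dist = torch.distributions.Categorical(logits=scores[remaining])
        idx = dist.sample()
        log_prob = log_prob + dist.log_prob(idx)
        ranking.append(remaining.pop(idx.item()))
    return ranking, log_prob

def average_precision(ranking, labels) -> float:
    """Discrete reward: average precision of the sampled ranking."""
    hits, ap = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if labels[doc] == 1:
            hits += 1
            ap += hits / rank
    return ap / max(hits, 1)

# One policy-gradient step for a single query's candidate documents.
feat_dim, n_docs = 10, 8
net = ScoreNet(feat_dim)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

feats = torch.randn(n_docs, feat_dim)             # document features (context)
labels = torch.randint(0, 2, (n_docs,)).tolist()  # binary relevance labels

scores = net(feats)
ranking, log_prob = sample_ranking(scores)
reward = average_precision(ranking, labels)       # could also mix several measures
loss = -reward * log_prob                         # REINFORCE: maximize expected reward
opt.zero_grad()
loss.backward()
opt.step()
```

Because the reward is applied to the sampled ranking as a whole, any discrete measure (or weighted combination of measures) can be plugged in without a differentiable approximation, which is the gap the abstract refers to.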

Paper: BanditRank: Learning to Rank Using Contextual Bandits (external site)