University of Tsukuba Team at the TREC 2023 Deep Learning Track

KAIYU YANG (University of Tsukuba), SUMIO FUJITA, HAITAO YU (University of Tsukuba), HIDEO JOHO (University of Tsukuba)

The Thirty-Second Text REtrieval Conference (TREC 2023)

March 01, 2024

This manuscript describes our participation in the TREC 2023 Deep Learning Track, focusing on the Passage Ranking Task. We submitted three automatic runs to evaluate how well Large Language Models (LLMs) perform passage re-ranking. Our study covers both API-based models such as ChatGPT and standalone offline LLMs, each paired with a different re-ranking strategy: ChatGPT re-ranks with a listwise strategy to produce the final results, while the offline LLMs are applied through pairwise ranking prompting. Preliminary results show that LLMs can re-rank passages accurately and efficiently, and that the choice of re-ranking strategy significantly affects the final ranking, offering useful insights for future research and applications in this area.
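To make the pairwise ranking prompting idea concrete, the sketch below shows one common way such comparisons can be aggregated into a ranking: every pair of candidate passages is compared, and passages are ordered by their number of pairwise wins. This is an illustrative sketch only, not the authors' implementation; the `pairwise_prefers` function stands in for an actual LLM prompt (here replaced by a toy term-overlap heuristic so the code runs), and all names are hypothetical.

```python
from itertools import combinations

def pairwise_prefers(query, passage_a, passage_b):
    """Stand-in for an LLM pairwise ranking prompt.

    A real system would ask the model something like:
    "Query: {query}. Which passage is more relevant, A or B?"
    Here a toy term-overlap heuristic keeps the sketch runnable.
    """
    terms = set(query.lower().split())
    overlap = lambda p: len(terms & set(p.lower().split()))
    return overlap(passage_a) >= overlap(passage_b)

def pairwise_rerank(query, passages):
    """Re-rank passages by counting wins over all pairwise comparisons."""
    wins = {i: 0 for i in range(len(passages))}
    for i, j in combinations(range(len(passages)), 2):
        if pairwise_prefers(query, passages[i], passages[j]):
            wins[i] += 1
        else:
            wins[j] += 1
    # Most pairwise wins first
    order = sorted(wins, key=wins.get, reverse=True)
    return [passages[k] for k in order]

candidates = [
    "a note about cooking",
    "the TREC deep learning track overview",
    "deep learning basics",
]
ranked = pairwise_rerank("TREC deep learning track", candidates)
```

Note that comparing all pairs costs O(n^2) LLM calls; practical systems often reduce this with sorting-style or sliding-window comparison schedules.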

Paper: University of Tsukuba Team at the TREC 2023 Deep Learning Track (external link)