Publications

WORKSHOP (INTERNATIONAL) Towards Incorporating Personalized Context for Conversational Information Seeking

Haitao Yu (University of Tsukuba), Lingzhen Zheng (University of Tsukuba), Kaiyu Yang (University of Tsukuba), Sumio Fujita, Hideo Joho (University of Tsukuba)

The 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024)

July 18, 2024

Conversational information seeking (CIS) extends classic search into a conversational setting and has attracted significant attention in recent years. Yet one size does not fit all: users with different personas often need high-quality personalized responses; for example, for a search about alternatives to cow’s milk, the desired responses may differ considerably. In this work, we focus on CIS with personalized retrieval and response generation. Specifically, we follow the CIS paradigm presented in the TREC iKAT track, which consists of three core tasks, namely personal textual knowledge base (PTKB) statement ranking, passage ranking, and response generation. For PTKB statement ranking, we propose to fuse multiple large language models (LLMs). For passage ranking, we propose four different strategies for personalized retrieval. For response generation, we resort to zero-shot LLM-based answer generation that incorporates personalized context. The experimental results show that: (1) For PTKB statement ranking, our method achieves the best performance in terms of MRR on the set of iKAT organizers’ assessments, and it outperforms the GPT-4-based baseline. This indicates that a fusion of multiple LLMs is a promising choice when tackling problems of this kind. (2) For passage ranking, one of our proposed strategies achieves performance comparable to the Llama2-based baseline. On the other hand, our analysis indicates that how PTKB statements are incorporated for personalized retrieval matters, and direct concatenation is not recommended. (3) For response generation, our proposed method generates grounded, natural, and personalized responses, comparable to the top-tier LLM-based baseline.
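
The abstract states that PTKB statement ranking is handled by fusing multiple LLMs, but does not specify the fusion mechanism. A minimal sketch, assuming each LLM produces its own ranking of PTKB statements and the lists are merged with reciprocal rank fusion (the fusion rule and all statement texts below are illustrative assumptions, not taken from the paper):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of PTKB statements into one.

    `rankings` is a list of ranked lists (best first), e.g. one per LLM.
    `k` is the usual RRF smoothing constant. Returns statements sorted by
    fused score, highest first.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, statement in enumerate(ranking, start=1):
            scores[statement] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-LLM rankings for the "alternatives to cow's milk" example.
llm_a = ["I am lactose intolerant", "I prefer low-sugar drinks", "I live alone"]
llm_b = ["I prefer low-sugar drinks", "I am lactose intolerant", "I live alone"]
llm_c = ["I am lactose intolerant", "I live alone", "I prefer low-sugar drinks"]

print(reciprocal_rank_fusion([llm_a, llm_b, llm_c]))
# ['I am lactose intolerant', 'I prefer low-sugar drinks', 'I live alone']
```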
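
For response generation, the abstract describes zero-shot LLM-based answer generation that incorporates personalized context. A sketch of how such a prompt could be assembled from the query, the retrieved passages, and the top-ranked PTKB statements; the prompt wording and function name are assumptions for illustration, not the paper's actual prompt:

```python
def build_personalized_prompt(query, passages, ptkb_statements):
    """Assemble a zero-shot prompt that grounds the answer in retrieved
    passages while conditioning on the user's relevant PTKB statements."""
    persona = "\n".join(f"- {s}" for s in ptkb_statements)
    evidence = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "You are a conversational assistant. Answer the user's question using "
        "only the passages below, and tailor the answer to the user's "
        "personal context.\n\n"
        f"Personal context:\n{persona}\n\n"
        f"Passages:\n{evidence}\n\n"
        f"Question: {query}\nAnswer:"
    )

prompt = build_personalized_prompt(
    query="What are good alternatives to cow's milk?",
    passages=[
        "Oat milk is naturally lactose free and available in low-sugar variants ...",
        "Soy milk offers a protein content close to that of cow's milk ...",
    ],
    ptkb_statements=["I am lactose intolerant", "I prefer low-sugar drinks"],
)
print(prompt)  # pass this to any zero-shot LLM completion endpoint
```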

Paper: Towards Incorporating Personalized Context for Conversational Information Seeking (external link)