Publications
Journal Paper (Domestic): Image Style Transfer Model Retrieval using Transformer Encoder and Learning to Rank
Huu-Long Pham (University of Hyogo), Yoshiyuki Shoji (Shizuoka University), Sumio Fujita, Hiroaki Ohshima (University of Hyogo)
IPSJ Transactions on Databases (IPSJ-TOD)
2026.1.26
Image style transfer, a prominent application of generative AI, has significantly impacted digital content creation. However, the proliferation of style transfer models makes it challenging for users to find a model that generates a desired style. Searching for an appropriate model is typically a labor-intensive and computationally prohibitive task. To address this, we propose an effective method for style transfer model retrieval. Our method takes an image exhibiting a target style as a query and ranks a collection of available models by their ability to reproduce that style. Our approach uses a Vision Transformer to encode the query image’s features into patch-level embeddings. Concurrently, style models are represented as learnable embedding vectors. A transformer encoder then fuses the query image and model embeddings to compute a relevance score, indicating the model’s suitability for generating a style similar to that of the query image. Our method is optimized end-to-end using a hybrid loss function that combines Binary Cross Entropy (BCE) with a Learning-to-Rank objective. To evaluate the proposed method, we constructed a benchmark dataset of 10,000 images, generated by applying 100 unique style models to 100 distinct content images. Our experiments demonstrate both the retrieval performance and the efficiency of the proposed method.
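The hybrid training objective described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper’s implementation: the abstract does not specify the exact form of the Learning-to-Rank term or how the two losses are weighted, so a RankNet-style pairwise term and a hypothetical balancing weight `alpha` are used here for concreteness.

```python
import math

def bce(score: float, label: float) -> float:
    """Pointwise binary cross-entropy on a sigmoid-activated relevance score."""
    p = 1.0 / (1.0 + math.exp(-score))
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

def pairwise_rank_loss(score_pos: float, score_neg: float) -> float:
    """RankNet-style pairwise term (an assumed instantiation of the
    Learning-to-Rank objective): penalizes scoring a relevant model
    below an irrelevant one."""
    return math.log(1.0 + math.exp(-(score_pos - score_neg)))

def hybrid_loss(score_pos: float, score_neg: float, alpha: float = 0.5) -> float:
    """Hybrid loss combining pointwise BCE with the pairwise ranking term.
    `alpha` is a hypothetical weight; the paper's actual weighting is
    not given in the abstract."""
    pointwise = bce(score_pos, 1.0) + bce(score_neg, 0.0)
    pairwise = pairwise_rank_loss(score_pos, score_neg)
    return alpha * pointwise + (1.0 - alpha) * pairwise
```

When the relevant model is scored well above the irrelevant one, both terms shrink toward zero; when the ordering is reversed, both terms grow, which is the behavior the retrieval objective relies on.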
Paper:
Image Style Transfer Model Retrieval using Transformer Encoder and Learning to Rank
(External site)