Workshop (International) Constructing Image–Text Pair Dataset from Books
Yamato Okamoto (Naver Cloud Corp. / Works Mobile Japan Corp.), Haruto Toyonaga (Doshisha Univ.), Yoshihisa Ijiri, Hirokatsu Kataoka
International Conference on Computer Vision workshop on "Towards the Next Generation of Computer Vision Datasets" (ICCVW Datacomp)
Digital archiving is becoming widespread owing to its effectiveness in protecting valuable books and in delivering their contents to many people electronically. In this paper, we propose a novel approach to leveraging digital archives for machine learning. If such digitized data can be fully utilized, machine learning has the potential to uncover unknown insights and, ultimately, to acquire knowledge autonomously, just as humans do by reading books. As a first step, we design a dataset construction pipeline comprising an optical character reader (OCR), an object detector, and a layout analyzer for the autonomous extraction of image–text pairs. In our experiments, we apply this pipeline to old photo books to construct an image–text pair dataset, demonstrating its effectiveness in image–text retrieval and insight extraction.
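To make the pipeline concrete, the sketch below mocks the three stages named in the abstract (OCR, object detection, layout analysis) and shows how their outputs could be combined into image–text pairs. Everything here is a hypothetical stand-in, not the authors' actual models or code: `Region`, `pair_by_layout`, the caption heuristic, and the sample page are all illustrative assumptions.

```python
# Minimal sketch of an image–text pair extraction pipeline.
# All components are hypothetical stand-ins for the paper's OCR,
# detector, and layout analyzer; only the pairing logic is shown.

from dataclasses import dataclass

@dataclass
class Region:
    kind: str     # "image" (from the detector) or "text" (from the OCR)
    bbox: tuple   # (x0, y0, x1, y1) in page coordinates
    content: str  # OCR text, or an image identifier

def pair_by_layout(regions, max_gap=50):
    """Pair each image with the nearest text block just below it,
    a common caption heuristic; a real layout analyzer would use
    richer cues (columns, reading order, font size)."""
    images = [r for r in regions if r.kind == "image"]
    texts = [r for r in regions if r.kind == "text"]
    pairs = []
    for img in images:
        # Candidate captions: text blocks starting within max_gap
        # pixels below the image's bottom edge.
        below = [t for t in texts if 0 <= t.bbox[1] - img.bbox[3] <= max_gap]
        if below:
            caption = min(below, key=lambda t: t.bbox[1] - img.bbox[3])
            pairs.append((img.content, caption.content))
    return pairs

# Mock detector + OCR output for one photo-book page (invented data).
page = [
    Region("image", (10, 10, 200, 150), "photo_001"),
    Region("text",  (10, 160, 200, 190), "A street market scene."),
    Region("text",  (10, 400, 200, 430), "Chapter 2: City Life"),
]

print(pair_by_layout(page))
# → [('photo_001', 'A street market scene.')]
```

The heuristic deliberately ignores text far from any image (here, the chapter heading), which mirrors the filtering role a layout analyzer plays in such a pipeline.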