Leveraging known data for missing label prediction in a cultural heritage context
Cultural heritage is a reliable medium for transmitting history and knowledge. Cultural heritage assets are exhibited in museums and heritage sites all over the world, but many are poorly labeled, which diminishes their historical value: once an asset's history is lost, so is its value. Classifying and annotating overlooked or incompletely documented cultural assets restores their historical value and enables the discovery of new historical links. In this paper, we tackle the challenge of automatically classifying and annotating cultural heritage assets using both their visual features and the metadata already at hand. Traditional approaches rely mainly on image data and machine-learning techniques to predict missing labels, yet visual data are rarely the only information available. We present a novel multimodal classification approach for cultural heritage assets based on a multitask neural network in which a convolutional neural network (CNN) learns visual features and a regular feed-forward network learns textual features. The two networks are merged and trained with a shared loss, so that classification draws on both image and textual features. Initial tests on painting assets show that our approach outperforms traditional CNNs that rely on images alone.
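The fusion idea described in the abstract — a CNN branch for images and a plain feed-forward branch for textual metadata, merged and trained under one loss — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the layer sizes, `text_dim`, and the concatenation-based fusion are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Hypothetical sketch: a CNN branch for images and an MLP branch
    for textual metadata, concatenated and fed to a shared
    classification head trained with a single loss."""

    def __init__(self, num_classes, text_dim):
        super().__init__()
        # CNN branch for visual feature learning (toy depth, for illustration)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Feed-forward branch for textual metadata (e.g. embedded labels)
        self.mlp = nn.Sequential(
            nn.Linear(text_dim, 64), nn.ReLU(),
        )
        # Shared head over the concatenated visual + textual features
        self.head = nn.Linear(32 + 64, num_classes)

    def forward(self, image, text):
        fused = torch.cat([self.cnn(image), self.mlp(text)], dim=1)
        return self.head(fused)

# Forward pass with dummy data; one cross-entropy loss drives both branches.
model = MultimodalClassifier(num_classes=5, text_dim=300)
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 300))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 3]))
print(logits.shape)  # torch.Size([2, 5])
```

Because the two branches share the classification loss, gradients flow into both the convolutional and the textual weights, which is what lets the textual metadata compensate for ambiguous visual evidence.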