Aggregating image and text quantized correlated components

Abstract: Cross-modal tasks occur naturally for multimedia content that can be described along two or more modalities, such as visual content and text. Such tasks require "translating" information from one modality to another. Methods like kernelized canonical correlation analysis (KCCA) attempt to solve such tasks by finding aligned subspaces in the description spaces of the different modalities. Since they favor correlations over modality-specific information, these methods have shown some success in both cross-modal and bi-modal tasks. However, we show that directly using the subspace alignment obtained by KCCA leads only to coarse translation abilities. To address this problem, we first put forward a new representation method that aggregates the information provided by the projections of both modalities on their aligned subspaces. We further suggest a method relying on neighborhoods in these subspaces to complete uni-modal information. Our proposal achieves state-of-the-art results for bi-modal classification on Pascal VOC07 and improves the state of the art by over 60% for cross-modal retrieval on Flickr 8K/30K.
Document type: Conference papers
Contributor: Léna Le Roy
Submitted on: Friday, January 10, 2020 - 4:25:38 PM
Last modification on: Wednesday, September 28, 2022 - 5:59:30 AM




Distributed under a Creative Commons Attribution 4.0 International License



Thi Quynh Nhi Tran, Hervé Le Borgne, Michel Crucianu. Aggregating image and text quantized correlated components. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, United States, 2016, pp. 2046-2054. ⟨10.1109/CVPR.2016.225⟩. ⟨cea-01843176⟩


