
Harnessing noisy Web images for deep representation

Abstract: The ever-growing body of Web images is probably the next important data source for scaling up deep neural networks, which have recently surpassed humans in image classification tasks. The fact that deep networks are hungry for labelled data prevents them from extracting valuable information from Web images, which are abundant and cheap. There have been efforts to train neural networks such as autoencoders in unsupervised or semi-supervised settings. Nonetheless, they perform worse than supervised methods, partly because the loss functions used in unsupervised methods, for instance the Euclidean loss, fail to guide the network to learn discriminative features and to ignore unnecessary details. We instead train convolutional networks in a supervised setting, but use weakly labelled data, namely large amounts of unannotated Web images downloaded from Flickr and Bing. Our experiments are conducted at several data scales, with different choices of network architecture, and with different data preprocessing techniques. The effectiveness of our approach is shown by the good generalization of the learned representations to six new public datasets.
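
To make the idea in the abstract concrete, the following is a minimal sketch of supervised training on weakly labelled Web images: each image's label is simply the query keyword used to download it, the network is trained with a discriminative (cross-entropy) loss rather than a reconstruction loss, and the classifier head is then discarded so the penultimate layer can serve as a transferable representation. The ResNet-18 backbone, the "web_images/<keyword>/*.jpg" directory layout, and all hyper-parameters are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: supervised training of a CNN on weakly labelled Web images.
# The labels are noisy query keywords, not curated annotations.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Weak labels: the sub-folder name is the (noisy) query keyword
# under which the images were downloaded (assumed layout).
train_set = datasets.ImageFolder("web_images", transform=preprocess)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

model = models.resnet18(num_classes=len(train_set.classes))
criterion = nn.CrossEntropyLoss()   # discriminative loss, unlike Euclidean reconstruction
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for epoch in range(10):             # illustrative number of epochs
    for images, weak_labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), weak_labels)
        loss.backward()
        optimizer.step()

# After training, drop the classifier and reuse the penultimate
# activations as a generic image representation for new datasets.
feature_extractor = nn.Sequential(*list(model.children())[:-1])
```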
Metadata

https://hal-cea.archives-ouvertes.fr/cea-01756775
Contributor: Léna Le Roy
Submitted on: Tuesday, April 3, 2018 - 8:45:16 AM
Last modification on: Monday, February 10, 2020 - 6:13:48 PM

Citation

Phong Vo, Alexandru Ginsca, Hervé Le Borgne, Adrian Popescu. Harnessing noisy Web images for deep representation. Computer Vision and Image Understanding, Elsevier, 2017, 164, pp. 68-81. ⟨10.1016/j.cviu.2017.01.009⟩. ⟨cea-01756775⟩
