Conference paper, 2022

Improving few-shot learning through multi-task representation learning theory

Abstract

In this paper, we consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms in practice and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the performance of meta-learning methods via a new spectral-based regularization term and confirm its efficiency through experimental studies on few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.
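The abstract does not spell out the form of the proposed spectral-based regularization term. Purely as an illustration, the sketch below shows one plausible way such a penalty could be implemented in PyTorch: it penalizes the condition number (ratio of largest to smallest singular value) of a matrix of task-specific predictors such as class prototypes or last-layer weights. The function name spectral_regularizer, the predictors argument, and the weight lambda_reg are hypothetical and do not necessarily match the formulation used in the paper.

```python
import torch

def spectral_regularizer(predictors: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Illustrative spectral penalty on a matrix of task-specific predictors.

    predictors: (num_classes_or_tasks, feature_dim) matrix, e.g. stacked
    class prototypes or last-layer weight vectors of the current episode.
    Returns the condition number (largest / smallest singular value),
    which is small when the predictors are well-conditioned, i.e. spread
    evenly in representation space.
    """
    singular_values = torch.linalg.svdvals(predictors)
    return singular_values.max() / (singular_values.min() + eps)

# Hypothetical usage inside a meta-learning training step:
# loss = task_loss + lambda_reg * spectral_regularizer(prototypes)
```

In this sketch the penalty is simply added to the episodic loss with a scalar weight; minimizing it pushes the singular values of the predictor matrix toward one another, which is the kind of spectral property the MTR bounds reviewed in the paper depend on.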
Main file: ECCV22_Bouniot_with_annex_NoteSpringer.pdf (2.08 MB)
Origin: Files produced by the author(s)

Dates and versions

cea-04041943, version 1 (22-03-2023)

Identifiers

HAL Id: cea-04041943
DOI: 10.1007/978-3-031-20044-1_25

Cite

Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard. Improving few-shot learning through multi-task representation learning theory. 17th European Conference on Computer Vision – ECCV 2022, Oct 2022, Tel Aviv, Israel. pp.435-452, ⟨10.1007/978-3-031-20044-1_25⟩. ⟨cea-04041943⟩
60 Views
17 Downloads
