Improving few-shot learning through multi-task representation learning theory

Conference paper, 2022

Abstract

In this paper, we consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms in practice and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the performance of meta-learning methods via a new spectral-based regularization term and confirm its efficiency through experimental studies on few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.
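The abstract refers to a new spectral-based regularization term derived from MTR learning bounds. As a rough illustration only (the exact form used in the paper is not given here), such a term can be sketched as a penalty on the condition number of a matrix of task-specific prediction weights, so that the source tasks cover the learned representation space evenly; the function name and the use of the largest-to-smallest singular-value ratio are assumptions for this sketch:

```python
import numpy as np

def spectral_penalty(W):
    """Hypothetical spectral regularizer: the ratio of the largest to the
    smallest singular value (condition number) of the task-weight matrix
    W, with one row per source task. A ratio near 1 means the task
    weights span the representation space evenly, the regime favoured by
    recent MTR learning bounds."""
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

# Well-conditioned task weights incur no penalty beyond the minimum of 1.
print(spectral_penalty(np.eye(4)))            # 1.0
# Anisotropic task weights are penalized in proportion to their imbalance.
print(spectral_penalty(np.diag([3.0, 1.0])))  # 3.0
```

In practice a differentiable variant of such a term would be added to the meta-learning training loss; this sketch only conveys the quantity being controlled.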
Embargoed file (available in 0 years, 3 months, 9 days).

Dates and versions

cea-04041943 , version 1 (22-03-2023)

Cite

Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard. Improving few-shot learning through multi-task representation learning theory. 17th European Conference on Computer Vision – ECCV 2022, Oct 2022, Tel Aviv, Israel. pp.435-452, ⟨10.1007/978-3-031-20044-1_25⟩. ⟨cea-04041943⟩