Approximate message-passing for convex optimization with non-separable penalties

Abstract: We introduce an iterative optimization scheme for convex objectives consisting of a linear loss and a non-separable penalty, based on the expectation-consistent approximation and the vector approximate message-passing (VAMP) algorithm. Specifically, the penalties we address are convex in a linear transformation of the variable to be determined, a notable example being total variation (TV). We describe the connection between message-passing algorithms, typically used for approximate inference, and proximal methods for optimization, and show that our scheme is, like VAMP, similar in nature to the Peaceman-Rachford splitting, with the important difference that the stepsizes are set adaptively. Finally, we benchmark the performance of our VAMP-like iteration on problems where TV penalties are useful, namely classification in task fMRI and reconstruction in tomography, and show faster convergence than state-of-the-art approaches such as FISTA and ADMM in most settings.
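To make the connection to splitting methods concrete, the following is a minimal sketch of the classical Peaceman-Rachford iteration that the abstract compares against, applied to a generic composite objective f(x) + g(x) with f a quadratic loss and g an l1 penalty. This is an illustration of the standard splitting with a fixed stepsize gamma, not the paper's VAMP-like scheme (whose stepsizes are set adaptively); all function and variable names here are our own.

```python
import numpy as np

def prox_quadratic(v, A, y, gamma):
    # Proximal operator of f(x) = 0.5*||Ax - y||^2:
    # solves (I + gamma*A^T A) x = v + gamma*A^T y
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + gamma * (A.T @ A), v + gamma * (A.T @ y))

def prox_l1(v, thresh):
    # Proximal operator of thresh*||x||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def peaceman_rachford(A, y, lam, gamma=0.5, iters=500):
    # Peaceman-Rachford splitting: z <- R_g(R_f(z)), where
    # R_h = 2*prox_{gamma h} - I is the reflected resolvent.
    n = A.shape[1]
    z = np.zeros(n)
    for _ in range(iters):
        x = prox_quadratic(z, A, y, gamma)       # resolvent of f
        u = prox_l1(2 * x - z, gamma * lam)      # resolvent of g at the reflection
        z = z + 2 * (u - x)                      # full (unrelaxed) PR update
    return prox_quadratic(z, A, y, gamma)
```

Unlike Douglas-Rachford/ADMM, the full-step Peaceman-Rachford iteration is only guaranteed to converge under additional assumptions (e.g. strong convexity of one term, which holds here when A has full column rank); the fixed gamma above is exactly the tuning burden that an adaptive-stepsize scheme aims to remove.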
Document type:
Preprint, working paper
t18/141. 2018

https://hal-cea.archives-ouvertes.fr/cea-01932983
Contributor: Emmanuelle De Laborderie
Submitted on: Friday, November 23, 2018 - 14:35:06
Last modified on: Saturday, March 16, 2019 - 01:52:49

File

approximate_message_passing.pd...
Files produced by the author(s)

Identifiers

  • HAL Id : cea-01932983, version 1
  • arXiv: 1809.06304

Citation

Andre Manoel, Florent Krzakala, Bertrand Thirion, Gaël Varoquaux, Lenka Zdeborová. Approximate message-passing for convex optimization with non-separable penalties. t18/141. 2018. 〈cea-01932983〉
