Supplement to "An Iterative Smoothing Algorithm for Regression with Structured Sparsity" - CEA - Commissariat à l'énergie atomique et aux énergies alternatives
Preprint / Working Paper. Year: 2016


Abstract

High-dimensional regression or classification models are increasingly used to analyze biological data such as neuroimaging or genetic data sets. However, classical penalized algorithms produce dense solutions that are difficult to interpret without arbitrary thresholding. Alternatives based on sparsity-inducing penalties suffer from coefficient instability. Complex structured sparsity-inducing penalties are a promising approach to force the solution to adhere to some domain-specific constraints and thus offer new perspectives in biomarker identification. We propose a generic optimization framework that can combine any smooth convex loss function with (i) penalties whose proximal operator is known and (ii) a large range of complex, non-smooth convex structured penalties, such as total variation or overlapping group lasso. Although many papers have addressed a similar goal, few have tackled it in such a generic way and in the context of high-dimensional data (> 10^5 features). The proposed continuation algorithm, called CONESTA, dynamically smooths the complex penalties to avoid the computation of proximal operators, which are either unknown or expensive to compute. The decreasing sequence of smoothing parameters is dynamically adapted, using the duality gap, in order to maintain the optimal convergence speed toward any desired global precision. First, we demonstrate that CONESTA achieves a faster convergence rate than classical (non-continuation) proximal gradient smoothing. Second, experiments conducted on both simulated and experimental MRI data show that CONESTA outperforms the excessive gap method, ADMM, classical proximal gradient smoothing and inexact FISTA in terms of convergence speed and/or precision of the solution. Third, on the experimental MRI data set, we establish the superiority of structured sparsity-inducing penalties (l1 and total variation) over non-structured methods in terms of the recovery of meaningful and stable groups of predictive variables.
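To illustrate the continuation idea described above, here is a minimal sketch, not the authors' implementation: it replaces the structured penalty (total variation, overlapping group lasso) with a plain l1 penalty, uses gradient descent instead of FISTA for the inner solver, and shrinks the smoothing parameter mu on a fixed geometric schedule rather than via the duality gap as CONESTA does. The function names (`conesta_sketch`, `smoothed_l1_grad`) and all parameter values are hypothetical.

```python
import numpy as np

def smoothed_l1_grad(beta, mu):
    """Gradient of the Nesterov-smoothed l1 norm (mu > 0)."""
    # The smoothed l1 norm is Huber-like; its gradient clips beta/mu to [-1, 1].
    return np.clip(beta / mu, -1.0, 1.0)

def conesta_sketch(X, y, lam, mu0=1.0, tau=0.5, n_outer=15, n_inner=500):
    """Toy continuation loop: repeatedly minimize the mu-smoothed objective
    0.5 * ||X b - y||^2 + lam * l1_mu(b), then shrink mu and warm-start."""
    p = X.shape[1]
    beta = np.zeros(p)
    lip_loss = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the loss gradient
    mu = mu0
    for _ in range(n_outer):
        # The smoothed penalty adds lam/mu to the Lipschitz constant.
        step = 1.0 / (lip_loss + lam / mu)
        for _ in range(n_inner):
            grad = X.T @ (X @ beta - y) + lam * smoothed_l1_grad(beta, mu)
            beta = beta - step * grad
        mu *= tau  # CONESTA adapts mu via the duality gap; geometric decay here
    return beta
```

With an orthogonal design the result approaches the soft-thresholded solution as mu shrinks, which is the behavior the continuation scheme is designed to recover at any desired precision.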
Main file: ols_nestv_supp.pdf (449.6 KB)
Origin: Files produced by the author(s)

Dates and versions

cea-01324021 , version 1 (31-05-2016)
cea-01324021 , version 2 (05-10-2016)
cea-01324021 , version 3 (21-11-2016)
cea-01324021 , version 4 (22-04-2018)

Identifiers

  • HAL Id: cea-01324021, version 2

Cite

Fouad Hadj-Selem, Tommy Löfstedt, Vincent Frouin, Vincent Guillemot, Edouard Duchesnay. Supplement to "An Iterative Smoothing Algorithm for Regression with Structured Sparsity". 2016. ⟨cea-01324021v2⟩
882 views
535 downloads
