Journal article in IEEE Transactions on Medical Imaging, 2018

Continuation of Nesterov’s Smoothing for Regression with Structured Sparsity in High-Dimensional Neuroimaging

Abstract

Predictive models applied to high-dimensional brain images can decode cognitive states or provide the diagnosis/prognosis of a clinical condition or its evolution. Spatial regularization through structured sparsity offers new perspectives in this context: it reduces the risk of overfitting while providing interpretable neuroimaging signatures by forcing the solution to adhere to domain-specific constraints. Total Variation (TV) is a promising candidate for structured penalization: it enforces spatial smoothness of the solution while segmenting predictive regions from the background. We consider the problem of minimizing the sum of a smooth convex loss, a non-smooth convex penalty (whose proximal operator is known) and a wide range of possible complex, non-smooth convex structured penalties such as TV or overlapping group Lasso. Existing solvers are limited either in the functions they can minimize or in their practical capacity to scale to high-dimensional imaging data. Nesterov’s smoothing technique can be used to minimize a large number of non-smooth convex structured penalties, but reasonable precision requires a small smoothing parameter, which slows convergence to unacceptable levels. To benefit from the versatility of Nesterov’s smoothing technique, we propose a first-order continuation algorithm, CONESTA, which automatically generates a sequence of decreasing smoothing parameters. The generated sequence maintains the optimal convergence speed towards any globally desired precision. Our main contributions are: (i) an expression of the duality gap that probes the current distance to the global optimum, used to adapt the smoothing parameter and hence the convergence speed; this expression applies to many penalties and can be used with solvers other than CONESTA; (ii) an expression for the particular smoothing parameter that minimizes the number of iterations required to reach a given precision; (iii) a convergence proof and rate, which improve on classical proximal-gradient smoothing methods. We demonstrate on both simulated and high-dimensional structural neuroimaging data that CONESTA significantly outperforms many state-of-the-art solvers with regard to convergence speed and precision.
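To make the mechanism concrete, below is a minimal, hypothetical sketch (not the authors’ implementation) of the idea in Python/NumPy: a 1D TV penalty smoothed with Nesterov’s technique inside FISTA, wrapped in a continuation loop over decreasing smoothing parameters. For brevity the smoothing parameter follows a simple geometric schedule, whereas CONESTA derives it from the duality gap; all function names and constants here are illustrative assumptions.

```python
# Illustrative sketch of a CONESTA-style continuation loop for
#   min_beta  (1/2)||X beta - y||^2 + l1 * ||beta||_1 + ltv * TV(beta)
# in 1D, with TV smoothed via Nesterov's technique. The geometric
# schedule for mu is a simplification of the paper's duality-gap-driven
# update; all names below are hypothetical.
import numpy as np

def tv_grad_smoothed(beta, D, mu):
    """Gradient of the Nesterov-smoothed TV penalty: D^T alpha*(beta),
    where alpha*(beta) projects D beta / mu onto the unit inf-norm ball."""
    alpha = np.clip(D @ beta / mu, -1.0, 1.0)
    return D.T @ alpha

def prox_l1(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(X, y, D, l1, ltv, mu, beta, n_iter=500):
    """FISTA on the smoothed problem for a fixed mu."""
    # Lipschitz constant of the smooth part: ||X||_2^2 + ltv * ||D||_2^2 / mu.
    L = np.linalg.norm(X, 2) ** 2 + ltv * np.linalg.norm(D, 2) ** 2 / mu
    z, t = beta.copy(), 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y) + ltv * tv_grad_smoothed(z, D, mu)
        beta_new = prox_l1(z - grad / L, l1 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        z = beta_new + (t - 1.0) / t_new * (beta_new - beta)
        beta, t = beta_new, t_new
    return beta

def conesta_sketch(X, y, l1=0.1, ltv=0.1, mu0=1.0, shrink=0.5, n_outer=8):
    p = X.shape[1]
    D = (np.eye(p, k=1) - np.eye(p))[:-1]  # 1D finite-difference operator
    beta, mu = np.zeros(p), mu0
    for _ in range(n_outer):               # continuation: decreasing mu
        beta = fista(X, y, D, l1, ltv, mu, beta)
        mu *= shrink                       # paper: mu chosen from the duality gap
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))
beta_true = np.zeros(100); beta_true[40:60] = 1.0  # piecewise-constant signal
y = X @ beta_true + 0.01 * rng.standard_normal(50)
print(conesta_sketch(X, y)[35:65].round(2))
```

Warm-starting each inner FISTA run from the previous solution is what makes the continuation cheap: early runs with a large smoothing parameter are fast but coarse, and later runs refine the solution as the parameter shrinks.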
Files

HadjSelem-18_preprint-ieeetmi_conesta.pdf (1.74 MB)
HadjSelem-18_ieeetmi_conesta_supp.pdf (851.3 KB)
Origin: Files produced by the author(s)

Dates and versions

cea-01883286, version 1 (28-09-2018)

Cite

Fouad Hadj-Selem, Tommy Lofstedt, Elvis Dohmatob, Vincent Frouin, Mathieu Dubois, et al. Continuation of Nesterov’s Smoothing for Regression with Structured Sparsity in High-Dimensional Neuroimaging. IEEE Transactions on Medical Imaging, in press, 2018. ⟨10.1109/TMI.2018.2829802⟩. ⟨cea-01883286⟩
248 views
399 downloads
