
Modelling the influence of data structure on learning in neural networks: the hidden manifold model

Abstract: The lack of crisp mathematical models that capture the structure of real-world data sets is a major obstacle to a detailed theoretical understanding of deep neural networks. Here, we introduce a generative model for data sets that we call the hidden manifold model (HMM). The idea is to have high-dimensional inputs lie on a lower-dimensional manifold, with labels that depend only on their position within this manifold, akin to a single-layer decoder or generator in a generative adversarial network. We first demonstrate the effect of structured data sets by experimentally comparing the dynamics and the performance of two-layer neural networks trained on three different data sets: (i) an unstructured synthetic data set containing random i.i.d. inputs, (ii) a structured data set drawn from the HMM, and (iii) a simple canonical data set containing MNIST images. We pinpoint two phenomena related to the dynamics of the networks and their ability to generalise that appear only when training on structured data sets, and we experimentally demonstrate that training networks on data sets drawn from the HMM reproduces both phenomena observed during training on the real data set. Our main theoretical result is that the learning dynamics in the hidden manifold model are amenable to analytical treatment: we prove a "Gaussian Equivalence Theorem", opening the way to further detailed theoretical studies. In particular, we show how the dynamics of stochastic gradient descent for a two-layer network are captured by a set of ordinary differential equations that track the generalisation error at all times.
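The abstract describes the generative structure of the HMM only at a high level. The snippet below is a minimal, illustrative sketch (not the authors' code) of sampling a structured data set of this kind, together with an unstructured i.i.d. counterpart for comparison, corresponding to cases (ii) and (i) above. The latent dimension n_latent, the fixed feature matrix F, the tanh nonlinearity and its scaling, and the random teacher w_teacher acting on the latent coordinates are all assumptions introduced here for illustration; the paper specifies its own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 10_000   # number of inputs
n_inputs = 784       # ambient (input) dimension, e.g. MNIST-sized
n_latent = 16        # dimension of the hidden manifold, assumed << n_inputs

# Fixed "feature" matrix mapping latent coordinates to input space,
# playing the role of the single-layer generator mentioned in the abstract.
F = rng.standard_normal((n_latent, n_inputs))

# Latent coordinates of each sample on the hidden manifold.
C = rng.standard_normal((n_samples, n_latent))

# Inputs are a fixed nonlinear function of the projected latent coordinates;
# the tanh nonlinearity and the 1/sqrt(n_latent) scaling are illustrative assumptions.
X = np.tanh(C @ F / np.sqrt(n_latent))

# Labels depend only on the position on the manifold, here through a
# hypothetical random linear teacher acting on the latent coordinates.
w_teacher = rng.standard_normal(n_latent)
y = np.sign(C @ w_teacher)

# For comparison, an unstructured data set with i.i.d. Gaussian inputs
# of the same size (case (i) in the abstract).
X_iid = rng.standard_normal((n_samples, n_inputs))
y_iid = np.sign(X_iid @ rng.standard_normal(n_inputs))
```

Training the same two-layer network on (X, y) and on (X_iid, y_iid) then makes it possible to look for the phenomena that, according to the abstract, appear only when the inputs have this hidden low-dimensional structure.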
Document type:
Preprints, Working Papers, ...

Cited literature: 71 references

https://hal-cea.archives-ouvertes.fr/cea-02529246
Contributor: Emmanuelle de Laborderie
Submitted on: Thursday, April 2, 2020 - 11:09:28 AM
Last modification on: Tuesday, September 22, 2020 - 3:46:47 AM

File

1909.11500.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : cea-02529246, version 1
  • ARXIV : 1909.11500

Citation

Sebastian Goldt, Marc Mezard, Florent Krzakala, Lenka Zdeborová. Modelling the influence of data structure on learning in neural networks: the hidden manifold model. 2020. ⟨cea-02529246⟩
