V. Vapnik, Statistical Learning Theory, 1998.

P. L. Bartlett and S. Mendelson, Rademacher and Gaussian complexities: Risk bounds and structural results, Journal of Machine Learning Research, vol.3, pp.463-482, 2002.

H. Seung, H. Sompolinsky, and N. Tishby, Statistical mechanics of learning from examples, Physical Review A, vol.45, issue.8, p.6056, 1992.

T. L. Watkin, A. Rau, and M. Biehl, The statistical mechanics of learning a rule, Reviews of Modern Physics, vol.65, issue.2, p.499, 1993.

R. Monasson and R. Zecchina, Learning and generalization theories of large committee-machines, Modern Physics Letters B, vol.9, issue.30, pp.1887-1897, 1995.

R. Monasson and R. Zecchina, Weight space structure and internal representations: a direct approach to learning and generalization in multilayer neural networks, Physical Review Letters, vol.75, issue.12, p.2432, 1995.

A. Engel and C. Van den Broeck, Statistical Mechanics of Learning, 2001.

C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, Understanding deep learning requires rethinking generalization, 2016.

P. Chaudhari, A. Choromanska, S. Soatto, Y. LeCun, C. Baldassi et al., Entropy-SGD: Biasing gradient descent into wide valleys, 2016.

C. H. Martin and M. W. Mahoney, Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior, 2017.

J. Barbier, F. Krzakala, N. Macris, L. Miolane, and L. Zdeborová, Phase transitions, optimal errors and optimality of message-passing in generalized linear models, 2017.
URL : https://hal.archives-ouvertes.fr/cea-01614258

M. Baity-Jesi, L. Sagun, M. Geiger, S. Spigler, G. Ben Arous et al., Comparing dynamics: Deep neural networks versus glassy systems, 2018.

M. Mézard, G. Parisi, and M. Virasoro, Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications, vol.9, 1987.

M. Mézard and A. Montanari, Information, Physics, and Computation, 2009.

D. L. Donoho, A. Maleki, and A. Montanari, Message-passing algorithms for compressed sensing, Proceedings of the National Academy of Sciences, vol.106, issue.45, pp.18914-18919, 2009.

S. Rangan, Generalized approximate message passing for estimation with random linear mixing, 2011 IEEE International Symposium on Information Theory Proceedings (ISIT), pp.2168-2172, 2011.

M. Bayati and A. Montanari, The dynamics of message passing on dense graphs, with applications to compressed sensing, IEEE Transactions on Information Theory, vol.57, issue.2, pp.764-785, 2011.

A. Javanmard and A. Montanari, State evolution for general approximate message passing algorithms, with applications to spatial coupling, Information and Inference: A Journal of the IMA, vol.2, issue.2, pp.115-144, 2013.

H. Schwarze, Learning a rule in a multilayer neural network, Journal of Physics A: Mathematical and General, vol.26, issue.21, p.5781, 1993.

H. Schwarze and J. Hertz, Generalization in a large committee machine, Europhysics Letters, vol.20, issue.4, p.375, 1992.

H. Schwarze and J. Hertz, Generalization in fully connected committee machines, Europhysics Letters, vol.21, issue.7, p.785, 1993.

G. Mato and N. Parga, Generalization properties of multilayered neural networks, Journal of Physics A: Mathematical and General, vol.25, issue.19, p.5047, 1992.

D. Saad and S. A. Solla, On-line learning in soft committee machines, Physical Review E, vol.52, issue.4, p.4225, 1995.

J. Barbier and N. Macris, The adaptive interpolation method: A simple scheme to prove replica formulas in Bayesian inference, 2017.

D. L. Donoho, I. Johnstone, and A. Montanari, Accurate prediction of phase transitions in compressed sensing via a connection to minimax denoising, IEEE Transactions on Information Theory, vol.59, issue.6, pp.3396-3433, 2013.

L. Zdeborová and F. Krzakala, Statistical physics of inference: thresholds and algorithms, Advances in Physics, vol.65, issue.5, pp.453-552, 2016.

Y. Deshpande and A. Montanari, Finding hidden cliques of size $\sqrt{N/e}$ in nearly linear time, Foundations of Computational Mathematics, vol.15, issue.4, pp.1069-1128, 2015.

A. S. Bandeira, A. Perry, and A. S. Wein, Notes on computational-to-statistical gaps: predictions using statistical physics, 2018.

A. El Alaoui, A. Ramdas, F. Krzakala, L. Zdeborová, and M. I. Jordan, Decoding from pooled data: Sharp information-theoretic bounds, 2016.
URL : https://hal.archives-ouvertes.fr/cea-01553606

A. El Alaoui, A. Ramdas, F. Krzakala, L. Zdeborová, and M. I. Jordan, Decoding from pooled data: Phase transitions of message passing, 2017 IEEE International Symposium on Information Theory (ISIT), pp.2780-2784, 2017.
URL : https://hal.archives-ouvertes.fr/cea-01553606

J. Zhu, D. Baron, and F. Krzakala, Performance limits for noisy multimeasurement vector problems, IEEE Transactions on Signal Processing, vol.65, issue.9, pp.2444-2454, 2017.
DOI : 10.1109/tsp.2016.2646663

F. Guerra, Broken replica symmetry bounds in the mean field spin glass model, Communications in Mathematical Physics, vol.233, issue.1, pp.1-12, 2003.
DOI : 10.1007/s00220-002-0773-5

URL : http://arxiv.org/pdf/cond-mat/0205123

M. Talagrand, Spin Glasses: A Challenge for Mathematicians: Cavity and Mean Field Models, vol.46, 2003.

D. J. Thouless, P. W. Anderson, and R. G. Palmer, Solution of 'Solvable model of a spin glass', Philosophical Magazine, vol.35, issue.3, pp.593-601, 1977.

M. Mézard, The space of interactions in neural networks: Gardner's computation with the cavity method, Journal of Physics A: Mathematical and General, vol.22, issue.12, pp.2181-2190, 1989.

M. Opper and O. Winther, Mean field approach to Bayes learning in feed-forward neural networks, Physical Review Letters, vol.76, issue.11, p.1964, 1996.
DOI : 10.1103/physrevlett.76.1964

Y. Kabashima, Inference from correlated patterns: a unified theory for perceptron learning and linear vector channels, Journal of Physics: Conference Series, vol.95, issue.1, p.012001, 2008.
DOI : 10.1088/1742-6596/95/1/012001

URL : http://iopscience.iop.org/article/10.1088/1742-6596/95/1/012001/pdf

C. Baldassi, A. Braunstein, N. Brunel, and R. Zecchina, Efficient supervised learning in networks with binary synapses, Proceedings of the National Academy of Sciences, vol.104, issue.26, pp.11079-11084, 2007.

B. Aubin, A. Maillard, J. Barbier, F. Krzakala, N. Macris et al., AMP implementation of the committee machine, 2018.

L. Sagun, V. U. Guney, G. Ben Arous, and Y. LeCun, Explorations on high dimensional landscapes, 2014.

E. Gardner and B. Derrida, Optimal storage properties of neural network models, Journal of Physics A: Mathematical and General, vol.21, issue.1, p.271, 1988.
DOI : 10.1088/0305-4470/21/1/031

J. Barbier, N. Macris, M. Dia, and F. Krzakala, Mutual information and optimality of approximate message-passing in random linear estimation, 2017.
DOI : 10.1109/allerton.2016.7852290

URL : http://arxiv.org/pdf/1607.02335

M. Opper and W. Kinzel, Statistical mechanics of generalization, Models of neural networks III, pp.151-209, 1996.
DOI : 10.1007/978-1-4612-0723-8_5

J. Barbier and F. Krzakala, Approximate message-passing decoder and capacity achieving sparse superposition codes, IEEE Transactions on Information Theory, vol.63, pp.4894-4927, 2017.
DOI : 10.1109/tit.2017.2713833

URL : http://arxiv.org/pdf/1503.08040

M. J. Wainwright and M. I. Jordan, Graphical models, exponential families, and variational inference, Foundations and Trends in Machine Learning, vol.1, pp.1-305, 2008.
DOI : 10.1561/2200000001

URL : http://www.eecs.berkeley.edu/~wainwrig/Papers/WaiJor08_FTML.pdf

M. Bayati, M. Lelarge, and A. Montanari, Universality in polytope phase transitions and message passing algorithms, The Annals of Applied Probability, vol.25, issue.2, pp.753-822, 2015.
DOI : 10.1214/14-aap1010

URL : https://hal.archives-ouvertes.fr/hal-01254901