Abstract
This article gives a new proof that fully connected neural networks with random weights and biases converge to Gaussian processes in the regime where the input dimension, output dimension, and depth are kept fixed, while the hidden layer widths tend to infinity. Unlike prior work, convergence is shown assuming only moment conditions for the distribution of weights and for quite general nonlinearities.
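For illustration only, the sketch below simulates the limit the abstract describes: a fully connected network with random weights and biases, evaluated at a few fixed inputs, whose outputs become approximately jointly Gaussian as the hidden widths grow. It assumes Gaussian weights scaled by 1/√(fan-in), unit-variance biases, and a tanh nonlinearity for concreteness; these are special cases, not the general moment conditions and nonlinearities treated in the article, and the function `random_network`, the widths, and the sample sizes are illustrative choices.

```python
# Illustrative sketch (not from the paper): draw many independent random
# fully connected networks and inspect the joint distribution of their
# outputs at a fixed set of input points.
import numpy as np

def random_network(x, widths, rng, nonlinearity=np.tanh):
    """One draw of a random fully connected network applied to inputs x.

    x      : (num_inputs, input_dim) array of fixed input points
    widths : list of hidden-layer widths (sent to infinity in the theorem)
    """
    h = x
    n_in = x.shape[1]
    for n_out in widths + [1]:              # final layer maps to a 1-d output
        # Gaussian weights with 1/sqrt(fan-in) scaling (an assumed special case)
        W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        b = rng.normal(0.0, 1.0, size=n_out)
        h = h @ W + b
        if n_out != 1:                      # no nonlinearity on the output layer
            h = nonlinearity(h)
        n_in = n_out
    return h[:, 0]

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 5))                 # 3 fixed input points in dimension 5
samples = np.array([random_network(x, [256, 256], rng) for _ in range(2000)])

# For large widths the rows of `samples` behave like draws from a multivariate
# Gaussian across the 3 input points; the empirical covariance stabilizes.
print(np.cov(samples.T))
```

Increasing the hidden widths (and the number of draws) makes the empirical joint distribution over the fixed inputs look closer to a Gaussian process evaluated at those points, which is the content of the limit theorem summarized above.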
| Original language | American English |
|---|---|
| Pages (from-to) | 4798-4819 |
| Number of pages | 22 |
| Journal | Annals of Applied Probability |
| Volume | 33 |
| Issue number | 6A |
| DOIs | |
| State | Published - Dec 2023 |
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty
Keywords
- Gaussian processes
- Neural networks
- Limit theorems