Maximal Initial Learning Rates in Deep ReLU Networks

Gaurav Iyer, Boris Hanin, David Rolnick

Research output: Contribution to journal › Conference article › peer-review

Abstract

Training a neural network requires choosing a suitable learning rate, which involves a trade-off between speed and effectiveness of convergence. While there has been considerable theoretical and empirical analysis of how large the learning rate can be, most prior work focuses only on late-stage training. In this work, we introduce the maximal initial learning rate η, the largest learning rate at which a randomly initialized neural network can successfully begin training and achieve (at least) a given threshold accuracy. Using a simple approach to estimate η, we observe that in constant-width fully-connected ReLU networks, η behaves differently from the maximum learning rate later in training. Specifically, we find that η is well predicted as a power of (depth × width), provided that (i) the width of the network is sufficiently large compared to the depth, and (ii) the input layer is trained at a relatively small learning rate. We further analyze the relationship between η and the sharpness λ1 of the network at initialization, indicating that the two quantities are closely, though not inversely, related. We formally prove bounds for λ1 in terms of (depth × width) that align with our empirical results.
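The "simple approach to estimate η" mentioned in the abstract can be pictured as a search over candidate learning rates, keeping the largest one at which training still succeeds. The sketch below is not the paper's method: it is a hypothetical bisection estimate on a toy quadratic loss f(w) = ½aw², where gradient descent is known to converge exactly when η < 2/a, so the estimated threshold can be checked against that closed form. The names `trains_successfully` and `max_initial_lr`, and all constants, are illustrative assumptions.

```python
# Hedged sketch (NOT the paper's procedure): estimate the maximal learning rate
# by bisection on a toy quadratic loss f(w) = 0.5 * a * w**2. Gradient descent
# on this loss converges iff eta < 2 / a, so the estimate should approach 2 / a.

def trains_successfully(eta, a=4.0, w0=1.0, steps=1000, tol=1e-3):
    """Run gradient descent from w0; 'success' = |w| driven below tol
    (a stand-in for reaching a threshold accuracy)."""
    w = w0
    for _ in range(steps):
        w -= eta * a * w          # gradient of 0.5*a*w^2 is a*w
        if abs(w) > 1e6:          # clearly diverged; stop early
            return False
    return abs(w) < tol

def max_initial_lr(lo=1e-4, hi=10.0, iters=40):
    """Bisection for the largest eta in [lo, hi] at which training succeeds,
    assuming success is (roughly) monotone in eta near the upper threshold."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if trains_successfully(mid):
            lo = mid
        else:
            hi = mid
    return lo

eta_star = max_initial_lr()
print(eta_star)  # approaches 2 / a = 0.5 for a = 4.0
```

For the networks studied in the paper, `trains_successfully` would instead train a randomly initialized fully-connected ReLU network and check whether it reaches the chosen threshold accuracy.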

Original language: American English
Pages (from-to): 14500-14530
Number of pages: 31
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: Jul 23, 2023 - Jul 29, 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

