Non-vacuous generalization bounds at the ImageNet scale: A PAC-Bayesian compression approach

Research output: Contribution to conference › Paper

Abstract

Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be “compressed” to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size that, combined with off-the-shelf compression algorithms, leads to state-of-the-art generalization guarantees. In particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. Additionally, we show that compressibility of models that tend to overfit is limited. Empirical results show that an increase in overfitting increases the number of bits required to describe a trained network.
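As a rough illustration of the flavor of result described in the abstract (a sketch, not the paper's exact theorem): if a trained network $h$ can be encoded in $c(h)$ bits under a prefix-free code fixed before seeing the data, then a standard Occam/PAC-Bayes argument gives, with probability at least $1 - \delta$ over $m$ i.i.d. training examples,

    $R(h) \le \hat{R}(h) + \sqrt{\dfrac{c(h)\ln 2 + \ln(1/\delta)}{2m}}$,

where $\hat{R}(h)$ is the empirical error and $R(h)$ the population error. A smaller compressed size $c(h)$ therefore yields a tighter guarantee, which is why off-the-shelf compression can make such bounds non-vacuous.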

Original language: English (US)
State: Published - Jan 1, 2019
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: May 6, 2019 – May 9, 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country: United States
City: New Orleans
Period: 5/6/19 – 5/9/19

All Science Journal Classification (ASJC) codes

  • Education
  • Language and Linguistics
  • Computer Science Applications
  • Linguistics and Language

Cite this

Adams, R. P. (2019). Non-vacuous generalization bounds at the ImageNet scale: A PAC-Bayesian compression approach. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States.