Continual Learning with Differential Privacy

Pradnya Desai, Phung Lai, Nhat Hai Phan, My T. Thai

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

Abstract

In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train machine learning (ML) models to learn a sequence of new tasks while memorizing previous tasks. We first introduce a notion of continual adjacent databases to bound the sensitivity of any data record participating in the training process of CL. Based upon that, we develop a new DP-preserving algorithm for CL with a data sampling strategy, quantifying the privacy risk of training data in the well-known Averaged Gradient Episodic Memory (A-GEM) approach by applying a moments accountant. Our algorithm provides formal privacy guarantees for data records across tasks in CL. Preliminary theoretical analysis and evaluations show that our mechanism tightens the privacy loss while maintaining promising model utility.
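To make the combination of ideas in the abstract concrete, the following is a minimal, hypothetical sketch of one training step that pairs DP-SGD-style per-example gradient clipping and Gaussian noise with an A-GEM-style projection against an episodic-memory gradient. It is not the authors' algorithm; names such as `clip_norm`, `noise_multiplier`, and `memory_grad` are illustrative, and the moments-accountant bookkeeping over sampled batches is not shown.

```python
import numpy as np

def dp_agem_step(per_example_grads, memory_grad,
                 clip_norm=1.0, noise_multiplier=1.1, lr=0.05):
    """Hypothetical DP + A-GEM update.

    per_example_grads: (batch, dim) gradients on the current task's mini-batch.
    memory_grad: (dim,) reference gradient computed on the episodic memory.
    Returns the parameter update (delta) to apply.
    """
    # 1. Clip each per-example gradient to bound its sensitivity (as in DP-SGD).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # 2. Sum the clipped gradients and add calibrated Gaussian noise; the cumulative
    #    privacy loss of these noisy, sampled updates would be tracked with a
    #    moments accountant (omitted here).
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        scale=noise_multiplier * clip_norm, size=clipped.shape[1])
    g = noisy_sum / len(per_example_grads)

    # 3. A-GEM-style projection: if the proposed update conflicts with the
    #    episodic-memory gradient (negative inner product), project it onto the
    #    half-space where it does not increase loss on past tasks.
    dot = g @ memory_grad
    if dot < 0:
        g = g - (dot / (memory_grad @ memory_grad)) * memory_grad

    # 4. Plain gradient-descent update on the privatized, projected gradient.
    return -lr * g
```

As a design note, clipping happens before the projection so that the DP sensitivity bound applies to each individual record; the projection only reuses the already-noised aggregate and therefore adds no extra privacy cost under this (assumed) setup.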

Original language: American English
Title of host publication: Neural Information Processing - 28th International Conference, ICONIP 2021, Proceedings
Editors: Teddy Mantoro, Minho Lee, Media Anugerah Ayu, Kok Wai Wong, Achmad Nizar Hidayanto
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 334-343
Number of pages: 10
ISBN (Print): 9783030923099
DOIs
State: Published - 2021
Event: 28th International Conference on Neural Information Processing, ICONIP 2021 - Virtual, Online
Duration: Dec 8 2021 - Dec 12 2021

Publication series

Name: Communications in Computer and Information Science
Volume: 1517 CCIS

Conference

Conference: 28th International Conference on Neural Information Processing, ICONIP 2021
City: Virtual, Online
Period: 12/8/21 - 12/12/21

ASJC Scopus subject areas

  • General Computer Science
  • General Mathematics

Keywords

  • Continual learning
  • Deep learning
  • Differential privacy
