A near-optimal algorithm for differentially-private principal components

Kamalika Chaudhuri, Anand D. Sarwate, Kaushik Sinha

Research output: Contribution to journal › Article › peer-review

66 Scopus citations

Abstract

The principal components analysis (PCA) algorithm is a standard tool for identifying good low-dimensional approximations to high-dimensional data. Many data sets of interest contain private or sensitive information about individuals, and algorithms that operate on such data should be sensitive to the privacy risks of publishing their outputs. Differential privacy is a framework for developing tradeoffs between privacy and the utility of these outputs. In this paper we investigate the theory and empirical performance of differentially private approximations to PCA and propose a new method that explicitly optimizes the utility of the output. We show that the sample complexity of the proposed method differs from that of the existing procedure in its scaling with the data dimension, and that our method is nearly optimal in terms of this scaling. We furthermore illustrate our results, showing that on real data there is a large performance gap between the existing method and our method.
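To make the setting concrete, the following is a minimal sketch of one common differentially private PCA baseline of the kind the abstract compares against: perturb the empirical second-moment matrix with symmetric Gaussian noise and then take its top eigenvectors. This is a generic input-perturbation illustration, not the algorithm proposed in the paper; the function name, the unit-norm assumption on rows, and the noise calibration are assumptions made here for the sketch.

```python
import numpy as np

def noisy_covariance_pca(X, k, epsilon, delta, rng=None):
    """Illustrative DP-PCA baseline (NOT the paper's method):
    add symmetric Gaussian noise to the second-moment matrix,
    then return its top-k eigenvectors.

    Assumes each row of X has L2 norm at most 1, so changing one
    row moves A = X^T X / n by at most O(1/n) in Frobenius norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    A = X.T @ X / n                        # empirical second-moment matrix
    # Gaussian-mechanism noise scale for (epsilon, delta)-DP,
    # calibrated to the O(1/n) sensitivity assumed above.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / (n * epsilon)
    noise = rng.normal(scale=sigma, size=(d, d))
    E = (noise + noise.T) / np.sqrt(2.0)   # symmetrize the perturbation
    w, V = np.linalg.eigh(A + E)           # eigendecomposition of noisy matrix
    return V[:, np.argsort(w)[::-1][:k]]   # eigenvectors of the k largest eigenvalues
```

The paper's contribution is a different mechanism whose sample complexity scales better with the dimension d than such noise-addition baselines; the sketch only shows what the privacy-utility tradeoff looks like in code.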

Original language: American English
Pages (from-to): 2905-2943
Number of pages: 39
Journal: Journal of Machine Learning Research
Volume: 14
State: Published - Sep 2013
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • Differential privacy
  • Dimension reduction
  • Principal components analysis
