Private Fine-tuning of Large Language Models with Zeroth-order Optimization

Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal

Research output: Contribution to journal › Article › peer-review

Abstract

Differentially private stochastic gradient descent (DP-SGD) allows models to be trained in a privacy-preserving manner, but it has proven difficult to scale to the era of foundation models. We introduce DP-ZO, a method for privately fine-tuning large language models by privatizing zeroth-order optimization. The key insight behind our design is that the gradient direction in the zeroth-order optimization we use is random, and the only information derived from the training data is the step size, i.e., a scalar. Therefore, we only need to privatize this scalar step size, which is memory-efficient. DP-ZO provides a strong privacy-utility trade-off, comparable to DP-SGD under (ε, δ)-DP, across different tasks and model sizes. Notably, DP-ZO has significant memory-efficiency advantages over DP-SGD and achieves higher utility under pure ε-DP when using the Laplace mechanism.
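The insight above can be sketched in a few lines: perturb the parameters along a shared random direction, measure the per-example scalar loss difference, and privatize only that scalar before stepping. The following minimal NumPy sketch illustrates the idea on a toy linear model; all names, hyperparameters, and the Gaussian noising are illustrative assumptions, not the paper's exact algorithm or privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_example_loss(theta, X, y):
    # Squared error of a toy linear model, one value per example.
    return (X @ theta - y) ** 2

def dp_zo_step(theta, X, y, lr=0.1, mu=1e-3, clip=1.0, sigma=1.0):
    # One zeroth-order step: the perturbation direction z is data-independent
    # randomness, so only the per-example scalar loss difference touches the
    # training data and needs to be clipped and noised.
    z = rng.standard_normal(theta.shape)                  # shared random direction
    plus = per_example_loss(theta + mu * z, X, y)
    minus = per_example_loss(theta - mu * z, X, y)
    diffs = (plus - minus) / (2 * mu)                     # per-example scalar estimates
    clipped = np.clip(diffs, -clip, clip)                 # bound each example's influence
    noisy = clipped.sum() + sigma * clip * rng.standard_normal()  # Gaussian mechanism
    return theta - lr * (noisy / len(y)) * z              # step along z by the private scalar

# Toy usage: privately fit y = 2x (hyperparameters illustrative, not tuned).
X = rng.standard_normal((64, 1))
y = 2.0 * X[:, 0]
theta = np.zeros(1)
for _ in range(200):
    theta = dp_zo_step(theta, X, y)
```

Because the model parameters are never touched per-example (only the scalar is), the memory cost stays close to that of non-private zeroth-order fine-tuning, which is the advantage the abstract highlights over DP-SGD.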

Original language: American English
Pages (from-to): 1-27
Number of pages: 27
Journal: Transactions on Machine Learning Research
Volume: 2025-January
State: Published - Jan 2025

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition

