TY - GEN
T1 - Distributed Gradient Tracking Methods with Guarantees for Computing a Solution to Stochastic MPECs
AU - Ebrahimi, Mohammadjavad
AU - Shanbhag, Uday V.
AU - Yousefian, Farzad
N1 - Publisher Copyright: © 2024 AACC.
PY - 2024
Y1 - 2024
N2 - We consider a class of hierarchical multi-agent optimization problems over networks where agents seek to compute an approximate solution to a single-stage stochastic mathematical program with equilibrium constraints (MPEC). MPECs subsume several important problem classes including Stackelberg games, bilevel programs, and traffic equilibrium problems, to name a few. Our goal in this work is to provably resolve stochastic MPECs in distributed regimes where the agents only have access to their local objectives and an inexact best-response to the lower-level equilibrium problem. To this end, we devise a new method called randomized smoothed distributed zeroth-order gradient tracking (rs-DZGT). This is a novel gradient tracking scheme where agents employ a zeroth-order implicit scheme to approximate their (unavailable) local gradients. Leveraging the properties of a randomized smoothing technique, we establish the convergence of the method and derive complexity guarantees for computing a stationary point of an optimization problem with a smoothed implicit global objective. We also provide preliminary numerical experiments where we compare the performance of rs-DZGT on networks under different settings with that of its centralized counterpart.
UR - https://www.scopus.com/pages/publications/85204457101
U2 - 10.23919/ACC60939.2024.10644388
DO - 10.23919/ACC60939.2024.10644388
M3 - Conference contribution
T3 - Proceedings of the American Control Conference
SP - 2182
EP - 2187
BT - 2024 American Control Conference, ACC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 American Control Conference, ACC 2024
Y2 - 10 July 2024 through 12 July 2024
ER -