TY - JOUR
T1 - A Simplified Form of Block-Iterative Operator Splitting and an Asynchronous Algorithm Resembling the Multi-Block Alternating Direction Method of Multipliers
AU - Eckstein, Jonathan
N1 - Funding Information: This work was supported in part by National Science Foundation (NSF) Grants 1115638 and 1617617, Computing and Communications Foundations, CISE directorate. The author would also like to thank Patrick Combettes, as this work grew out of the same discussions with him that led to the joint work []. Publisher Copyright: © 2017, Springer Science+Business Media New York.
PY - 2017/4/1
Y1 - 2017/4/1
N2 - This paper develops what is essentially a simplified version of the block-iterative operator splitting method already proposed by the author and P. Combettes, but with more general initialization conditions. It then describes one way of implementing this algorithm asynchronously under a computational model inspired by modern high-performance computing environments, which consist of interconnected nodes each having multiple processor cores sharing a common local memory. The asynchronous implementation framework is then applied to derive an asynchronous algorithm which resembles the alternating direction method of multipliers with an arbitrary number of blocks of variables. Unlike earlier proposals for asynchronous variants of the alternating direction method of multipliers, the algorithm relies neither on probabilistic control nor on restrictive assumptions about the problem instance, instead making only standard convex-analytic regularity assumptions. It also allows the proximal parameters to range freely between arbitrary positive bounds, possibly varying with both iterations and subproblems.
AB - This paper develops what is essentially a simplified version of the block-iterative operator splitting method already proposed by the author and P. Combettes, but with more general initialization conditions. It then describes one way of implementing this algorithm asynchronously under a computational model inspired by modern high-performance computing environments, which consist of interconnected nodes each having multiple processor cores sharing a common local memory. The asynchronous implementation framework is then applied to derive an asynchronous algorithm which resembles the alternating direction method of multipliers with an arbitrary number of blocks of variables. Unlike earlier proposals for asynchronous variants of the alternating direction method of multipliers, the algorithm relies neither on probabilistic control nor on restrictive assumptions about the problem instance, instead making only standard convex-analytic regularity assumptions. It also allows the proximal parameters to range freely between arbitrary positive bounds, possibly varying with both iterations and subproblems.
KW - Alternating direction method of multipliers (ADMM)
KW - Asynchronous algorithm
KW - Convex optimization
UR - http://www.scopus.com/inward/record.url?scp=85011841894&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85011841894&partnerID=8YFLogxK
U2 - 10.1007/s10957-017-1074-7
DO - 10.1007/s10957-017-1074-7
M3 - Article
SN - 0022-3239
VL - 173
SP - 155
EP - 182
JO - Journal of Optimization Theory and Applications
JF - Journal of Optimization Theory and Applications
IS - 1
ER -