Document Type

Article

Publication Version

Submitted Manuscript

Publication Date

2015

Journal or Book Title

IEEE Transactions on Signal Processing

Volume

63

Issue

2

First Page

482

Last Page

497

DOI

10.1109/TSP.2014.2367458

Abstract

Multi-agent distributed consensus optimization problems arise in many signal processing applications. Recently, the alternating direction method of multipliers (ADMM) has been used to solve this family of problems. ADMM-based distributed optimization has been shown to have a faster convergence rate than classic methods based on the consensus subgradient, but it can be computationally expensive, especially for problems with complicated structures or large dimensions. In this paper, we propose low-complexity algorithms that can reduce the overall computational cost of consensus ADMM by an order of magnitude for certain large-scale problems. Central to the proposed algorithms is the use of an inexact step for each ADMM update, which enables the agents to perform cheap computation at each iteration. Our convergence analyses show that the proposed methods converge well under certain convexity assumptions. Numerical results show that the proposed algorithms offer considerably lower computational complexity than standard ADMM-based distributed optimization methods.
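
To make the central idea concrete, the following is a minimal, hypothetical Python sketch (not the paper's algorithm) of a global-consensus ADMM loop in which each agent's exact subproblem solve is replaced by a single gradient step on its local augmented Lagrangian. The least-squares objectives, the penalty parameter rho, the step size alpha, and the averaging-based consensus update are illustrative assumptions introduced only for this sketch.

import numpy as np

# Hypothetical problem data: each of 5 agents holds a private least-squares
# term f_i(x) = 0.5 * ||A_i x - b_i||^2 over a shared 10-dimensional variable.
rng = np.random.default_rng(0)
n_agents, dim = 5, 10
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(20) for _ in range(n_agents)]

rho = 1.0                                        # ADMM penalty parameter (assumed)
L = max(np.linalg.norm(Ai, 2) ** 2 for Ai in A) + rho
alpha = 1.0 / L                                  # conservative gradient step size
x = [np.zeros(dim) for _ in range(n_agents)]     # local primal variables
y = [np.zeros(dim) for _ in range(n_agents)]     # local dual variables
z = np.zeros(dim)                                # consensus variable

for _ in range(1000):
    for i in range(n_agents):
        # Inexact x-update: one gradient step on the local augmented Lagrangian
        # instead of solving the local subproblem exactly.
        grad = A[i].T @ (A[i] @ x[i] - b[i]) + y[i] + rho * (x[i] - z)
        x[i] = x[i] - alpha * grad
    # z-update: averaging enforces the consensus constraints x_i = z.
    z = np.mean([x[i] + y[i] / rho for i in range(n_agents)], axis=0)
    # Dual ascent on the consensus constraints.
    for i in range(n_agents):
        y[i] = y[i] + rho * (x[i] - z)

# Sanity check against the centralized least-squares solution.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("distance to centralized solution:", np.linalg.norm(z - x_star))

The only change relative to standard consensus ADMM in this sketch is the x-update, which costs one matrix-vector product per agent per iteration; this reflects the abstract's point that an inexact per-iteration step lets agents perform cheap computation.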

Comments

This is a manuscript of an article from IEEE Transactions on Signal Processing 63 (2015): 482–497, doi: 10.1109/TSP.2014.2367458. Posted with permission.

Copyright Owner

IEEE

Language

en

File Format

application/pdf
