Degree Type


Date of Award


Degree Name

Doctor of Philosophy


Mechanical Engineering


First Advisor

Soumik Sarkar


Abstract

Large-scale multi-agent networked systems are becoming increasingly popular in industry and academia, as they can represent systems in diverse application areas such as intelligent surveillance and reconnaissance, mobile robotics, transportation networks, and complex buildings. In such systems, control and learning pose significant technical challenges that affect system performance and overall cost. While centralized optimization approaches have been widely used by the engineering and computer science communities, advanced and effective distributed optimization techniques have not been explored as thoroughly. This study examines various categories of centralized and distributed optimization methods that have been applied, or may be applicable, to diverse engineering and science problems. The performance of centralized or distributed optimization schemes depends significantly on factors including the type of objective function, the constraints, the step sizes, and the communication network. In this context, this dissertation focuses on developing novel distributed optimization algorithms to solve challenging control and learning problems in domains such as large-scale building energy systems and robotic networks.

Specifically, we develop a generalized gossip-based subgradient method for solving distributed optimization problems in large-scale networked systems, e.g., large-scale commercial building energy systems. Unlike previous work, a user-defined control parameter tunes the solutions along a spectrum from globally optimal to suboptimal and governs the trade-off between solution accuracy and convergence speed. We test and validate the proposed algorithm on a real multi-zone testbed equipped with a distributed control and sensing platform. In addition, we extend distributed optimization to deep learning to address an emerging topic: distributed deep learning in fixed-topology networks. While some previous work exists on this topic, data parallelism and distributed computation have not been sufficiently explored. We therefore propose a class of distributed deep learning methods that combine a consensus protocol with stochastic gradient descent. Moreover, to address the consensus-optimality trade-off in distributed convex and nonconvex optimization, especially in deep learning when agents' training datasets are unbalanced (non-iid), we propose and develop two new approaches: incremental consensus-based distributed stochastic gradient descent and a generalized consensus-based distributed (stochastic) gradient descent approach.
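The gossip-based subgradient idea described in the abstract can be sketched in a few lines. The sketch below is an illustrative baseline, not the dissertation's exact formulation: the interpolation parameter `epsilon` (blending pure local descent with full gossip averaging), the step size `alpha`, and the quadratic toy agents are all assumptions standing in for the user-defined control parameter the abstract mentions.

```python
import numpy as np

def generalized_gossip_subgradient(subgrads, x0, W, epsilon=1.0,
                                   alpha=0.05, iters=500):
    """Illustrative gossip/subgradient loop (not the dissertation's algorithm).

    subgrads: list of per-agent subgradient functions g_i(x)
    x0:       common initial iterate
    W:        doubly stochastic mixing matrix over the agent network
    epsilon:  illustrative knob blending local descent (0) with full
              gossip averaging (1)
    """
    n = len(subgrads)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))  # one row per agent
    M = (1 - epsilon) * np.eye(n) + epsilon * W       # interpolated mixing
    for _ in range(iters):
        G = np.stack([g(X[i]) for i, g in enumerate(subgrads)])
        X = M @ (X - alpha * G)   # local subgradient step, then gossip
    return X

# Toy example: two quadratic agents; the minimizer of
# f0 + f1 = x^2 + (x - 2)^2 is x = 1.
W = np.array([[0.5, 0.5], [0.5, 0.5]])
X = generalized_gossip_subgradient(
    [lambda x: 2 * x, lambda x: 2 * (x - 2.0)], [0.0], W)
```

With `epsilon = 1` and a connected network, both agents' iterates agree and approach the global minimizer; smaller `epsilon` weakens the averaging and lets iterates drift toward each agent's local optimum, which is one way to picture the accuracy/convergence trade-off.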
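The combination of a consensus protocol with stochastic gradient descent can likewise be illustrated on a toy non-iid split of a least-squares problem. This is a generic consensus-SGD baseline, not the incremental or generalized variants proposed in the dissertation; the data model, mixing matrix, batch size, and step size are illustrative assumptions.

```python
import numpy as np

def consensus_sgd(data, x0, W, alpha=0.05, steps=600, batch=4, seed=0):
    """Illustrative consensus-based distributed SGD.

    data: list of (A_i, b_i) least-squares problems, one per agent;
          giving each agent a different slice mimics a non-iid split.
    W:    doubly stochastic mixing matrix over the agent network.
    Each step: minibatch gradient of ||A_i x - b_i||^2 per agent,
    then one round of consensus averaging with neighbors.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))
    for _ in range(steps):
        G = np.zeros_like(X)
        for i, (A, b) in enumerate(data):
            idx = rng.choice(len(b), size=batch, replace=False)
            Ai, bi = A[idx], b[idx]
            G[i] = 2 * Ai.T @ (Ai @ X[i] - bi) / batch  # minibatch gradient
        X = W @ (X - alpha * G)  # local SGD step, then consensus averaging
    return X

# Non-iid toy split: each agent sees a different slice of data generated
# by a shared linear model b = A @ [1.0, -2.0].
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 2))
b = A @ np.array([1.0, -2.0])
data = [(A[:20], b[:20]), (A[20:], b[20:])]
W = np.array([[0.5, 0.5], [0.5, 0.5]])
X = consensus_sgd(data, [0.0, 0.0], W)
```

Neither agent's slice alone pins down the shared model as accurately as the full dataset; the consensus step is what pulls both agents toward the joint solution, which is the setting where the abstract's consensus-optimality trade-off arises.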

Copyright Owner

Zhanhong Jiang



File Format


File Size

162 pages