Campus Units

Electrical and Computer Engineering, Mathematics

Publication Version

Submitted Manuscript

Abstract

Many big data algorithms executed on MapReduce-like systems have a shuffle phase that often dominates the overall job execution time. Recent work has demonstrated schemes where the communication load in the shuffle phase can be traded off against the computation load in the map phase. In this work, we focus on a class of distributed algorithms, broadly used in deep learning, in which intermediate computations of the same task can be combined. Even though prior techniques reduce the communication load significantly, they require a number of jobs that grows exponentially in the system parameters. This limitation is crucial and may diminish the load gains as the algorithm scales. We propose a new scheme that achieves the same communication load as the state of the art while ensuring that both the number of jobs and the number of subfiles into which the data set must be split remain small.
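As a minimal illustration of the class of algorithms the abstract refers to (not the paper's CAMR scheme itself), the sketch below assumes the per-task intermediate values are additive, as with partial gradients in distributed deep learning. The function and variable names here are hypothetical: a mapper combines its intermediate results for the same reduce task into one value before the shuffle, so a single value crosses the network instead of one per subfile.

```python
# Illustrative sketch, not the paper's scheme: when intermediate values
# for the same task are additive (e.g. partial gradients), a node can
# aggregate them locally before the shuffle phase.

def map_phase(subfiles, compute_fn):
    # Each mapper produces one intermediate value per subfile it stores.
    return [compute_fn(s) for s in subfiles]

def combine(intermediate_values):
    # Additive aggregation: transmit one combined value instead of many.
    return sum(intermediate_values)

# Hypothetical example: each "subfile" is a list of numbers, and the
# per-subfile computation is their sum (a stand-in for a partial gradient).
subfiles = [[1, 2], [3, 4], [5, 6]]
partials = map_phase(subfiles, sum)  # one intermediate value per subfile
payload = combine(partials)          # single aggregated value to shuffle
```

Without aggregation the mapper would ship all three intermediate values; with it, the shuffle carries one, which is the kind of load reduction these combinable computations enable.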


This is a pre-print of the article Konstantinidis, Konstantinos and Aditya Ramamoorthy. "CAMR: Coded Aggregated MapReduce." arXiv preprint arXiv:1901.07418 (2019). Posted with permission.

Copyright Owner

The Authors


