
ICLR 2020

Decentralized Deep Learning with Arbitrary Communication Compression

Conference Paper · Poster Presentations · Artificial Intelligence · Machine Learning

Abstract

Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD achieves linear speedup in the number of workers for arbitrarily high compression ratios on general non-convex functions and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices connected by a peer-to-peer network, and (ii) in a datacenter.
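The abstract describes decentralized SGD where workers exchange compressed model information over a sparse communication graph. Choco-SGD's details are not spelled out on this page; the following is a minimal single-process NumPy sketch of that general pattern (local gradient steps, plus a gossip step over compressed differences between each worker's model and its publicly known copy). The quadratic local losses, ring topology, top-k compressor, and all parameter values are illustrative assumptions, not the paper's experimental setup.

    import numpy as np

    def top_k(v, k):
        """Top-k sparsification: keep only the k largest-magnitude entries."""
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
        return out

    rng = np.random.default_rng(0)
    n, d = 8, 20                       # workers, parameter dimension (illustrative)
    # Ring topology with a doubly stochastic mixing matrix W.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25

    targets = rng.normal(size=(n, d))  # each worker's local optimum (non-IID proxy)
    x = np.zeros((n, d))               # local models
    x_hat = np.zeros((n, d))           # publicly known compressed copies
    eta, gamma, k = 0.1, 0.5, 2        # step size, gossip step size, sparsity

    for t in range(500):
        grads = x - targets            # gradient of 0.5 * ||x_i - target_i||^2
        x = x - eta * grads            # local SGD step on each worker
        # Each worker compresses the difference between its model and its
        # public copy, and the (sparse, cheap-to-send) update is applied to
        # the public copy that all neighbors share.
        for i in range(n):
            x_hat[i] += top_k(x[i] - x_hat[i], k)
        # Gossip step toward neighbors' public copies: gamma * (W - I) @ x_hat.
        x = x + gamma * (W - np.eye(n)) @ x_hat

    consensus = x.mean(axis=0)
    print("max deviation from consensus:", np.abs(x - consensus).max())

With k = 2 of d = 20 coordinates transmitted per round (a 10x compression of the exchanged differences), the workers' models still contract toward a common consensus point, which is the qualitative behavior the abstract claims holds at arbitrarily high compression ratios.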

Authors

Anastasia Koloskova, Tao Lin, Sebastian U. Stich, Martin Jaggi

Keywords

No keywords are indexed for this paper.

Context

Venue: International Conference on Learning Representations
Archive span: 2013-2025
Indexed papers: 10294
Paper ID: 666747876906010480