
TIST 2021

Quantized Adam with Error Feedback

Journal Article · Artificial Intelligence · Intelligent Systems

Abstract

In this article, we present a distributed variant of an adaptive stochastic gradient method for training deep neural networks in the parameter-server model. To reduce the communication cost between the workers and the server, we incorporate two types of quantization schemes, namely gradient quantization and weight quantization, into the proposed distributed Adam. In addition, to reduce the bias introduced by the quantization operations, we propose an error-feedback technique that compensates for the quantized gradient. Theoretically, in the stochastic nonconvex setting, we show that the distributed adaptive gradient method with gradient quantization and error feedback converges to a first-order stationary point, and that the method with weight quantization and error feedback converges to a point whose error depends on the quantization level, in both single-worker and multi-worker modes. Finally, we apply the proposed distributed adaptive gradient methods to train deep neural networks, and experimental results demonstrate their efficacy.
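The error-feedback idea described above can be illustrated with a minimal single-worker sketch: the residual discarded by the quantizer in one step is added back to the gradient in the next, so the quantization error does not accumulate. This is an illustrative reconstruction only, not the paper's exact algorithm; the scaled sign quantizer, the function names, and all hyperparameter values below are assumptions.

```python
# Single-worker sketch of Adam with a quantized gradient and error
# feedback. Illustrative only: the quantizer and hyperparameters are
# assumptions, not the authors' exact method.
import numpy as np

def quantize(v):
    """Scaled sign quantizer: 1 bit per entry plus one scalar scale."""
    scale = np.mean(np.abs(v))
    return scale * np.sign(v)

def quantized_adam_step(w, grad, state, lr=1e-3,
                        beta1=0.9, beta2=0.999, eps=1e-8):
    # Error feedback: fold in the residual left over from the previous
    # quantization before quantizing, then store the new residual.
    corrected = grad + state["error"]
    q_grad = quantize(corrected)          # what a worker would transmit
    state["error"] = corrected - q_grad   # compensated on the next step

    # Standard Adam moment updates, applied to the quantized gradient.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * q_grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * q_grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Usage: one optimization step on random stand-in data.
w = np.random.randn(10)
state = {"m": np.zeros_like(w), "v": np.zeros_like(w),
         "error": np.zeros_like(w), "t": 0}
grad = np.random.randn(10)  # stand-in for a stochastic gradient
w = quantized_adam_step(w, grad, state)
```

In the multi-worker parameter-server setting the abstract describes, each worker would maintain its own error buffer and send only its quantized gradient to the server, which aggregates them before the Adam update.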

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
ACM Transactions on Intelligent Systems and Technology
Archive span
2010-2026
Indexed papers
1415
Paper id
845099117941989743