AAMAS 2025 Conference Paper
Trading-off Accuracy and Communication Cost in Federated Learning
- Mattia Jacopo Villani
- Emanuele Natale
- Frederik Mallmann-Trenn
Leveraging the training-by-pruning paradigm introduced by Zhou et al. [NeurIPS’19], Isik et al. [ICLR’23] proposed a federated learning protocol that achieves a 34-fold reduction in communication cost. We achieve compression improvements of orders of magnitude over the state of the art. The central idea of our framework is to encode the vector of network weights 𝑤 by a vector of trainable parameters 𝑝, such that 𝑤 = 𝑄 · 𝑝, where 𝑄 is a carefully generated sparse random matrix (that remains fixed throughout training). In this framework, the previous work of Zhou et al. [NeurIPS’19] is recovered when 𝑄 is diagonal and 𝑝 has the same dimension as 𝑤. We instead show that 𝑝 can effectively be chosen much smaller than 𝑤, while retaining the same accuracy at the price of a decrease in the sparsity of 𝑄. Since the server and clients only need to share 𝑝, this trade-off leads to a substantial improvement in communication cost. Moreover, we provide theoretical insight into our framework and establish a novel link between training-by-sampling and random convex geometry.
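To make the reparameterization concrete, below is a minimal sketch of the encoding 𝑤 = 𝑄 · 𝑝. The dimensions, sparsity level, and the use of `scipy.sparse.random` are illustrative assumptions, not the paper's exact construction; the point is only that 𝑄 is sparse, random, and fixed, while the much smaller 𝑝 is the only object trained and communicated.

```python
# Illustrative sketch of w = Q @ p with a fixed sparse random Q.
# All shapes and the density value are hypothetical choices.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

d_w = 10_000    # dimension of the weight vector w (assumed)
d_p = 500       # dimension of the trainable vector p, with d_p << d_w
density = 0.01  # fraction of nonzero entries in Q (assumed)

# Q is generated once and kept fixed throughout training.
# The Zhou et al. [NeurIPS'19] setting corresponds to a diagonal Q
# with d_p equal to d_w.
Q = sparse.random(d_w, d_p, density=density, random_state=0, format="csr")

p = rng.standard_normal(d_p)  # trainable parameters
w = Q @ p                     # reconstructed network weights

# Server and clients exchange only p, so the per-round communication
# cost scales with d_p rather than d_w.
print(f"floats exchanged per round: {d_p} instead of {d_w}")
```

Choosing a smaller d_p shrinks the message size further, at the cost of a denser 𝑄, which is the accuracy/communication trade-off the abstract describes.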