
AAAI 2025

TinyFoA: Memory Efficient Forward-Only Algorithm for On-Device Learning

Conference Paper AAAI Technical Track on Machine Learning II Artificial Intelligence

Abstract

Forward-only algorithms offer a promising, memory-efficient alternative to backpropagation (BP) for on-device learning. However, state-of-the-art forward-only algorithms, such as Forward-Forward (FF), still require substantial memory during training, often exceeding the limits of mobile edge and Internet of Things (IoT) devices. Meanwhile, existing memory-optimization techniques, such as binarizing parameters and activations, are designed primarily for BP and significantly degrade classification performance when applied to state-of-the-art forward-only algorithms. In this paper, we propose TinyFoA, a memory-efficient forward-only algorithm that reduces the dynamic memory overhead of training. TinyFoA improves memory efficiency not only through layer-wise training but also by partially updating each layer and by binarizing the weights and activations. We extensively evaluate TinyFoA against BP and other forward-only algorithms and demonstrate its superiority over state-of-the-art forward-only algorithms in both classification performance and training memory overhead, reducing memory overheads by an order of magnitude.
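To make the three ideas in the abstract concrete, the sketch below combines them in a single NumPy layer: a local, Forward-Forward-style "goodness" objective (so no backward pass through the network), sign-binarized weights with a straight-through update to real-valued master weights, and a partial update that touches only a random subset of output units per step. This is a minimal illustration under assumed details (the goodness threshold `theta`, the logistic local loss, the update fraction `frac`, and all class/function names are hypothetical), not the paper's TinyFoA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    # Sign binarization; the gradient is passed straight through
    # to the real-valued master weights (straight-through estimator).
    return np.where(w >= 0, 1.0, -1.0)

class LocalBinaryLayer:
    """One layer trained with a local FF-style goodness objective,
    binarized weights, and partial (subset) updates per step.
    All hyperparameters here are illustrative assumptions."""

    def __init__(self, d_in, d_out, lr=0.01, frac=0.5):
        self.w = rng.normal(0.0, 0.1, (d_in, d_out))  # real-valued master weights
        self.lr = lr
        self.frac = frac  # fraction of output units updated per step

    def forward(self, x):
        # Activations use the binarized weights: ReLU(x @ sign(W))
        return np.maximum(0.0, x @ binarize(self.w))

    def local_step(self, x, is_positive, theta=2.0):
        h = self.forward(x)
        g = (h ** 2).sum(axis=1)  # FF-style "goodness" per sample
        # Logistic local loss: push goodness above theta for positive
        # samples and below theta for negative samples.
        sign = 1.0 if is_positive else -1.0
        p = 1.0 / (1.0 + np.exp(-sign * (g - theta)))
        dL_dg = -sign * (1.0 - p)               # d loss / d goodness
        dL_dh = (2.0 * h) * dL_dg[:, None]      # d goodness / d activation
        dL_dh[h <= 0] = 0.0                     # ReLU gate
        grad = x.T @ dL_dh / len(x)             # STE: applied to real weights
        # Partial update: only a random subset of columns this step,
        # so the optimizer state touched per step shrinks accordingly.
        n_cols = max(1, int(self.frac * self.w.shape[1]))
        cols = rng.choice(self.w.shape[1], n_cols, replace=False)
        self.w[:, cols] -= self.lr * grad[:, cols]
        return g.mean()
```

Because each layer trains against its own local loss, no activations need to be stored for a global backward pass, which is the source of the memory savings that forward-only methods target; binarization and partial updates further shrink the per-step working set.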

Keywords

No keywords are indexed for this paper.

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980-2026
Indexed papers
28718
Paper id
608847999822969560