
TMLR 2026

A Unifying Framework for Parallelizing Sequential Models with Linear Dynamical Systems

Journal Article · Artificial Intelligence · Machine Learning

Abstract

Harnessing parallelism in seemingly sequential models is a central challenge for modern machine learning. Several approaches have been proposed for evaluating sequential processes in parallel using iterative fixed-point methods, such as Newton, Picard, and Jacobi iterations. In this work, we show that these methods can be understood within a common framework based on linear dynamical systems (LDSs), where different iteration schemes arise naturally as approximate linearizations of a nonlinear recursion. Moreover, we theoretically analyze the rates of convergence of these methods, and we verify the predictions of this theory with several case studies. This unifying framework highlights shared principles behind these techniques and clarifies when particular fixed-point methods are most likely to be effective. By bridging diverse algorithms through the language of LDSs, the framework provides a clearer theoretical foundation for parallelizing sequential models and points toward new opportunities for efficient and scalable computation.
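To make the core idea concrete, the sketch below shows the simplest of the fixed-point schemes the abstract mentions, a Jacobi iteration, applied to a scalar nonlinear recursion. This is a minimal illustration of the general principle (evaluating a sequential recursion by iterating parallel sweeps to a fixed point), not the paper's specific LDS-based framework; the recursion `f` and all parameter values are arbitrary choices for the example.

```python
import numpy as np

def f(x_prev, u):
    # Hypothetical nonlinear recursion step: x_t = tanh(0.5 * x_{t-1} + u_t).
    return np.tanh(0.5 * x_prev + u)

def sequential_scan(u, x0=0.0):
    # Ground truth: evaluate the recursion one step at a time.
    xs, x = [], x0
    for ut in u:
        x = f(x, ut)
        xs.append(x)
    return np.array(xs)

def jacobi_scan(u, x0=0.0, iters=50):
    # Jacobi fixed-point iteration: update every time step simultaneously
    # from the previous iterate. Each sweep is fully parallel across t;
    # information propagates one step per sweep, so after T sweeps the
    # iterate matches the sequential scan exactly.
    x = np.zeros(len(u))
    for _ in range(iters):
        shifted = np.concatenate(([x0], x[:-1]))  # x_{t-1} from last iterate
        x = f(shifted, u)                          # vectorized over all t
    return x

rng = np.random.default_rng(0)
u = rng.normal(size=20)
assert np.allclose(sequential_scan(u), jacobi_scan(u), atol=1e-10)
```

Newton- or Picard-type schemes replace the plain re-evaluation in each sweep with a linearization of `f` around the current iterate, which is where the linear-dynamical-systems view enters: each sweep then amounts to solving an LDS, and the choice of linearization determines the convergence rate.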

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
Transactions on Machine Learning Research
Archive span
2022-2026
Indexed papers
3849
Paper id
1003362128107035148