
Author name cluster

XINYU DING

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers (3)

IJCAI 2025 Conference Paper

Block Circulant Adapter for Large Language Models

  • XINYU DING
  • Meiqi Wang
  • Siyu Liao
  • Zhongfeng Wang

Fine-tuning large language models (LLMs) is difficult due to their huge model size. Recent Fourier-domain methods show potential for reducing fine-tuning costs. We propose a block circulant matrix-based fine-tuning method with a stable training heuristic that leverages the properties of circulant matrices and one-dimensional Fourier transforms to reduce storage and computation costs. Experiments show that our method uses 14× fewer parameters than VeRA, is 16× smaller than LoRA, and uses 32× fewer FLOPs than FourierFT, while maintaining comparable or better task performance. Our approach presents a promising frequency-domain way to fine-tune large models on downstream tasks.
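
The core primitive behind such an adapter can be sketched in a few lines: a block-circulant matrix-vector product in which each b×b circulant block is stored as a single length-b vector and applied with 1D FFTs. This is a minimal illustration of the general technique under assumed shapes, not the paper's implementation; the function name `block_circulant_matvec` and the layout of `c` are invented for this sketch.

```python
import numpy as np

def block_circulant_matvec(c, x):
    """c: (p, q, b) array holding the first column of each circulant block.
    x: (q * b,) input vector. Returns a (p * b,) output vector."""
    p, q, b = c.shape
    xb = x.reshape(q, b)
    Xf = np.fft.fft(xb, axis=-1)             # one 1D FFT per input segment
    Cf = np.fft.fft(c, axis=-1)              # one 1D FFT per block parameter vector
    Yf = (Cf * Xf[None, :, :]).sum(axis=1)   # frequency-domain multiply, sum over input blocks
    return np.fft.ifft(Yf, axis=-1).real.reshape(p * b)

# Sanity check against the equivalent dense block-circulant matrix.
rng = np.random.default_rng(0)
p, q, b = 2, 3, 4
c = rng.standard_normal((p, q, b))
x = rng.standard_normal(q * b)
dense = np.block([[np.stack([np.roll(c[i, j], s) for s in range(b)], axis=1)
                   for j in range(q)] for i in range(p)])
assert np.allclose(block_circulant_matvec(c, x), dense @ x)
```

Storing p·q·b parameters instead of p·q·b² is where the parameter savings of circulant-based methods come from, and each block costs O(b log b) rather than O(b²) per matrix-vector product.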

NeurIPS 2025 Conference Paper

Memory-Efficient Training with In-Place FFT Implementation

  • XINYU DING
  • Bangtian Liu
  • Siyu Liao
  • Zhongfeng Wang

Fast Fourier transforms (FFTs) are widely used to reduce memory and computational costs in deep learning. However, existing implementations, including the standard FFT and the real FFT (rFFT), cannot achieve true in-place computation. In particular, the rFFT maps an input of size $n$ to a complex output of size $\frac{n}{2}+1$, causing a dimensional mismatch and requiring additional memory allocation. We propose the first real-domain, fully in-place FFT framework (rdFFT) that preserves input-output dimensional consistency ($n \rightarrow n$). By leveraging butterfly operation symmetry and conjugate properties in the frequency domain, we design an implicit complex encoding scheme that eliminates intermediate cache usage entirely. Theoretically, our method reduces memory usage by 50% compared to the rFFT. Moreover, it enables zero-cache parameter updates by utilizing the derivative property of the Fourier transform to compute matrix inverses efficiently without intermediate storage. Experiments on multiple natural language understanding tasks demonstrate the method's effectiveness in maintaining model performance while significantly lowering memory overhead, offering a promising direction for frequency-domain lightweight adaptation.
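
The dimensional argument is easy to make concrete: for a real input of even length n, the rFFT produces n/2+1 complex bins, but bins 0 and n/2 are purely real, leaving exactly n independent real numbers that can be packed into an n-length real buffer. The sketch below illustrates only this n → n packing with NumPy; it is not the paper's in-place butterfly implementation, and the function names are invented.

```python
import numpy as np

def pack_rfft(x):
    """Pack the rfft of a real, even-length signal into an n-length real buffer."""
    n = x.shape[0]
    X = np.fft.rfft(x)            # n/2 + 1 complex bins
    out = np.empty(n)
    out[0] = X[0].real            # DC bin: imaginary part is zero
    out[1] = X[n // 2].real       # Nyquist bin: imaginary part is zero
    out[2::2] = X[1:n // 2].real  # interleave the remaining n/2 - 1 bins
    out[3::2] = X[1:n // 2].imag
    return out

def unpack_irfft(out):
    """Invert pack_rfft and recover the real signal."""
    n = out.shape[0]
    X = np.empty(n // 2 + 1, dtype=complex)
    X[0] = out[0]
    X[n // 2] = out[1]
    X[1:n // 2] = out[2::2] + 1j * out[3::2]
    return np.fft.irfft(X, n=n)

x = np.random.default_rng(0).standard_normal(16)
assert np.allclose(unpack_irfft(pack_rfft(x)), x)
```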

IJCAI 2025 Conference Paper

Parameter-Efficient Fine-Tuning with Circulant and Diagonal Vectors

  • XINYU DING
  • Lexuan Chen
  • Siyu Liao
  • Zhongfeng Wang

Foundation models have achieved tremendous success across domains. However, their huge computation and storage complexity makes these models difficult to fine-tune and less practical to deploy. Recent work shows that training in the Fourier domain can be an effective fine-tuning method in terms of both model performance and the number of trainable parameters. In this work, we propose to further reduce complexity by factorizing weights as products of interleaved circulant and diagonal matrices. In addition, we handle non-square fine-tuning weights by partitioning the circulant matrix into blocks. Our method avoids constructing the weight-change matrix and uses the 1D fast Fourier transform (FFT) instead of the 2D FFT. Experimental results show that our method achieves similar or better performance across various tasks with far fewer floating-point operations (FLOPs) and trainable parameters.
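
A minimal sketch of applying such a factorization, assuming a chain W = C_m D_m ... C_1 D_1 of square factors (the paper additionally partitions the circulant factor into blocks to handle non-square weights): each diagonal factor is one elementwise multiply and each circulant factor is one length-n circular convolution via a 1D FFT, so the whole product stores 2mn parameters rather than n². The function name and factor ordering here are assumptions, not the paper's API.

```python
import numpy as np

def circ_diag_apply(circ, diag, x):
    """circ, diag: (m, n) stacks of factor parameters; x: (n,) real input.
    Applies W @ x for W = C_m D_m ... C_1 D_1 without ever forming W."""
    y = x.copy()
    for c, d in zip(circ, diag):
        y = d * y                                             # diagonal factor: elementwise scale
        y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(y)).real  # circulant factor via 1D FFT
    return y
```

Each pass costs O(n log n), so the full chain runs in O(mn log n) time with O(mn) parameters, which is the trade-off the abstract describes relative to a dense n×n weight-change matrix.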