Arrow Research

Author name cluster

Aili Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers (3)

NeurIPS 2025 Conference Paper

Enhanced Self-Distillation Framework for Efficient Spiking Neural Network Training

  • Xiaochen Zhao
  • Chengting Yu
  • Kairong Yu
  • Lei Liu
  • Aili Wang

Spiking Neural Networks (SNNs) exhibit exceptional energy efficiency on neuromorphic hardware due to their sparse activation patterns. However, conventional training methods based on surrogate gradients and Backpropagation Through Time (BPTT) not only lag behind Artificial Neural Networks (ANNs) in performance, but also incur significant computational and memory overheads that grow linearly with the temporal dimension. To enable high-performance SNN training under limited computational resources, we propose an enhanced self-distillation framework, jointly optimized with rate-based backpropagation. Specifically, the firing rates of intermediate SNN layers are projected onto lightweight ANN branches, and high-quality knowledge generated by the model itself is used to optimize substructures through the ANN pathways. Unlike traditional self-distillation paradigms, we observe that low-quality self-generated knowledge may hinder convergence. To address this, we decouple the teacher signal into reliable and unreliable components, ensuring that only reliable knowledge is used to guide the optimization of the model. Extensive experiments on CIFAR-10, CIFAR-100, CIFAR10-DVS, and ImageNet demonstrate that our method reduces training complexity while achieving high-performance SNN training. Our code is available at https://github.com/Intelli-Chip-Lab/enhanced-self-distillation-framework-for-snn.
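
The abstract describes two mechanisms worth making concrete: projecting intermediate firing rates onto lightweight ANN branches, and filtering the self-generated teacher signal before distillation. Below is a minimal PyTorch sketch of both ideas; it is an illustration, not the authors' released code. The names (`AnnBranch`, `reliable_distill_loss`) are invented here, and the reliability criterion (keep only teacher outputs that classify a sample correctly) is one plausible stand-in for the paper's decoupling of reliable and unreliable components.

```python
# Hedged sketch of rate-projected ANN branches with "reliable-only" self-distillation.
# Assumptions: spikes arrive as a (T, B, C, H, W) tensor; the reliable/unreliable
# split is approximated by teacher correctness, which may differ from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnnBranch(nn.Module):
    """Lightweight ANN head attached to an intermediate SNN layer (hypothetical)."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, num_classes),
        )

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        rate = spikes.mean(dim=0)   # firing rate: average over the time dimension
        return self.head(rate)      # rate-based ANN pathway, no temporal unrolling

def reliable_distill_loss(branch_logits, teacher_logits, labels, tau: float = 4.0):
    """KL distillation restricted to samples whose teacher output is 'reliable'
    (here: the teacher predicts the true label)."""
    reliable = teacher_logits.argmax(dim=1).eq(labels)   # boolean sample mask
    if not reliable.any():
        return branch_logits.new_zeros(())
    p_t = F.softmax(teacher_logits[reliable].detach() / tau, dim=1)
    log_p_s = F.log_softmax(branch_logits[reliable] / tau, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * tau * tau
```

In a full training loop, this loss would presumably be added to each branch's task loss, with the detached final SNN output serving as the self-generated teacher.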

ECAI 2025 Conference Paper

Enhancing Learning of Spiking Neural Networks Through Normalization with Time-Based Statistics Estimation

  • Lei Liu
  • Chengting Yu
  • Kainan Wang
  • Aili Wang

Spiking Neural Networks (SNNs) represent a promising avenue for energy-efficient neuromorphic computing. Despite their potential, SNNs typically underperform compared to Artificial Neural Networks (ANNs) due to their complex spatio-temporal dynamics. To improve learning in these networks, researchers have developed various approaches that account for their unique characteristics; among them, normalization techniques have proven especially important. Recently, online learning algorithms have been explored for SNN training because they update network weights using only temporally local information, avoiding the high memory demands associated with Backpropagation Through Time (BPTT). However, this reliance on temporally local information hinders the integration of effective normalization techniques tailored for SNNs. In this work, we propose a Time-based Statistics Estimation (TSE) method to address limitations in existing normalization strategies for SNNs. We begin by establishing a systematic link between overall statistics and time-step-specific ones, leveraging the decomposability of key statistical measures. This insight allows the proposed TSE method to reliably estimate overall statistics using only recent iterations. Furthermore, the method is compatible with both BPTT and online learning, consistently yielding strong performance across learning paradigms. Experiments on CIFAR-10, CIFAR-100, ImageNet, and DVS-CIFAR10 demonstrate the superior performance of our method on both static and neuromorphic datasets. In particular, it achieves state-of-the-art performance in online learning for SNN training.
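
The key observation, that overall statistics decompose into time-step-specific ones, can be sketched in a few lines: a mean or second moment over the whole spike train is just the average of the per-time-step means or second moments, so a normalization layer can track running estimates from one step at a time. The PyTorch module below is a hedged illustration of that idea, not the paper's TSE formulation; the class name `TimeEstimatedNorm` and the exponential-moving-average update are assumptions.

```python
# Hedged sketch: estimating sequence-level normalization statistics from
# per-time-step statistics, in the spirit of TSE. The EMA update is an
# assumption; the paper's estimator may differ.
import torch
import torch.nn as nn

class TimeEstimatedNorm(nn.Module):
    def __init__(self, num_features: int, momentum: float = 0.1, eps: float = 1e-5):
        super().__init__()
        self.momentum, self.eps = momentum, eps
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        # Running estimates of the *overall* (all-time-step) statistics,
        # refreshed from the per-step statistics seen in recent iterations.
        self.register_buffer("run_mean", torch.zeros(num_features))
        self.register_buffer("run_var", torch.ones(num_features))

    def forward(self, x_t: torch.Tensor) -> torch.Tensor:
        # x_t: activations at a single time step, shape (B, C, H, W);
        # only temporally local information is consumed, as in online learning.
        if self.training:
            mean_t = x_t.mean(dim=(0, 2, 3))
            var_t = x_t.var(dim=(0, 2, 3), unbiased=False)
            with torch.no_grad():
                # Means and second moments decompose over time steps, so an
                # EMA over per-step statistics tracks the overall ones.
                self.run_mean.mul_(1 - self.momentum).add_(self.momentum * mean_t)
                self.run_var.mul_(1 - self.momentum).add_(self.momentum * var_t)
        m = self.run_mean.view(1, -1, 1, 1)
        v = self.run_var.view(1, -1, 1, 1)
        x_hat = (x_t - m) / torch.sqrt(v + self.eps)
        return x_hat * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)
```

Because each forward call consumes only the current time step's activations, a layer like this fits an online-learning loop where no computation graph is kept across the temporal dimension.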

NeurIPS 2024 Conference Paper

Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation

  • Chengting Yu
  • Lei Liu
  • Gaoang Wang
  • Erping Li
  • Aili Wang

Recent insights have revealed that rate-coding is a primary form of information representation captured by surrogate-gradient-based Backpropagation Through Time (BPTT) in training deep Spiking Neural Networks (SNNs). Motivated by these findings, we propose rate-based backpropagation, a training strategy specifically designed to exploit rate-based representations and reduce the complexity of BPTT. Our method minimizes reliance on detailed temporal derivatives by focusing on averaged dynamics, streamlining the computational graph to reduce the memory and computational demands of SNN training. We justify the gradient approximation between BPTT and the proposed method through both theoretical analysis and empirical observation. Comprehensive experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS validate that our method achieves performance comparable to its BPTT counterparts and surpasses state-of-the-art efficient training techniques. By leveraging the inherent benefits of rate-coding, this work sets the stage for more scalable and efficient SNN training in resource-constrained environments.
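
To make the averaged-dynamics idea concrete, the sketch below simulates an integrate-and-fire layer step by step without retaining the temporal graph, then routes gradients through a single differentiable pass over the time-averaged input using a straight-through estimator. This is one illustrative reading of rate-based backpropagation under stated assumptions (sigmoid rate surrogate, hard reset, `lif_rate` as a made-up helper), not the authors' exact gradient construction.

```python
# Hedged sketch: spikes are generated step by step with no stored temporal
# graph, and gradients flow through the average firing rate via a
# straight-through trick. Neuron and surrogate details are assumptions.
import torch
import torch.nn as nn

def lif_rate(layer: nn.Module, x_seq: torch.Tensor, v_th: float = 1.0):
    """x_seq: (T, B, ...) presynaptic input. Returns the average firing rate,
    with gradients routed through the time-averaged membrane drive instead of
    the full unrolled simulation (no BPTT graph over T)."""
    T = x_seq.shape[0]
    with torch.no_grad():                    # no temporal computation graph
        v, rate = torch.zeros_like(layer(x_seq[0])), 0.0
        for t in range(T):
            v = v + layer(x_seq[t])          # integrate presynaptic current
            spike = (v >= v_th).float()      # fire when threshold is crossed
            v = v * (1.0 - spike)            # hard reset after a spike
            rate = rate + spike / T          # accumulate the firing rate
    # Differentiable rate surrogate: one pass over the time-averaged input,
    # squashed to [0, 1]; the forward value is replaced by the simulated rate.
    surrogate = torch.sigmoid(layer(x_seq.mean(dim=0)) - v_th)
    return surrogate + (rate - surrogate).detach()
```

Under this reading, training memory no longer scales with T, since only the single surrogate pass participates in autograd.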