Arrow Research

Author name cluster

Yanfeng Lu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

AAAI Conference 2026 · Conference Paper

ChipMind: Retrieval-Augmented Reasoning for Long-Context Circuit Design Specifications

  • Changwen Xing
  • SamZaak Wong
  • Xinlai Wan
  • Yanfeng Lu
  • Mengli Zhang
  • Zebin Ma
  • Lei Qi
  • Zhengxiong Li

While Large Language Models (LLMs) demonstrate immense potential for automating integrated circuit (IC) development, their practical deployment is fundamentally limited by restricted context windows. Existing context-extension methods struggle to achieve effective semantic modeling and thorough multi-hop reasoning over extensive, intricate circuit specifications. To address this, we introduce ChipMind, a novel knowledge graph-augmented reasoning framework specifically designed for lengthy IC specifications. ChipMind first transforms circuit specifications into a domain-specific knowledge graph (ChipKG) through the Circuit Semantic-Aware Knowledge Graph Construction methodology. It then leverages the ChipKG-Augmented Reasoning mechanism, combining information-theoretic adaptive retrieval to dynamically trace logical dependencies with intent-aware semantic filtering to prune irrelevant noise, effectively balancing retrieval completeness and precision. Evaluated on an industrial-scale specification reasoning benchmark, ChipMind significantly outperforms state-of-the-art baselines, achieving an average improvement of 34.59% (up to 72.73%). Our framework bridges a critical gap between academic research and practical industrial deployment of LLM-aided Hardware Design (LAD).
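
The abstract names the framework's components without implementation detail. As a rough illustration of the general pattern it describes (graph construction, adaptive multi-hop retrieval, semantic filtering), the following minimal Python sketch pairs an entropy-based stopping rule with similarity filtering over a toy knowledge graph. Every identifier here is hypothetical; this is not ChipMind's actual algorithm.

    # Illustrative sketch only: generic knowledge-graph retrieval with an
    # entropy-based stopping rule and similarity filtering. All names are
    # hypothetical; ChipMind's real methods are not published in this listing.
    import numpy as np

    def entropy(p):
        """Shannon entropy of a probability vector."""
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def retrieve(graph, seed, query_vec, embed, max_hops=3, ent_tau=0.5, sim_tau=0.6):
        """Expand from seed nodes hop by hop; keep nodes semantically close
        to the query, and stop early once the relevance distribution over
        the frontier becomes low-entropy (i.e., the search is confident)."""
        kept, frontier = set(seed), set(seed)
        for _ in range(max_hops):
            cand = sorted({n for f in frontier for n in graph.get(f, ())} - kept)
            if not cand:
                break
            sims = np.array([query_vec @ embed[n] for n in cand])
            probs = np.exp(sims) / np.exp(sims).sum()         # softmax relevance
            kept |= {n for n, s in zip(cand, sims) if s >= sim_tau}
            if entropy(probs) < ent_tau:                      # confident: stop hopping
                break
            frontier = set(cand)
        return kept

    # Toy usage: a three-node "specification graph" with 2-d embeddings.
    g = {"clk": ["pll"], "pll": ["jitter"], "jitter": []}
    emb = {"clk": np.array([1.0, 0.0]), "pll": np.array([0.9, 0.1]),
           "jitter": np.array([0.8, 0.2])}
    print(retrieve(g, ["clk"], np.array([1.0, 0.0]), emb))

The entropy test is one simple way to trade retrieval completeness against precision, which is the balance the abstract highlights.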

AAAI Conference 2026 · Conference Paper

Temporal Dynamics Enhancer for Directly Trained Spiking Object Detectors

  • Fan Luo
  • Zeyu Gao
  • Xinhao Luo
  • Kai Zhao
  • Yanfeng Lu

Spiking Neural Networks (SNNs), with their brain-inspired spatiotemporal dynamics and spike-driven computation, have emerged as promising energy-efficient alternatives to Artificial Neural Networks (ANNs). However, existing SNNs typically replicate inputs directly or aggregate them into frames at fixed intervals. Such strategies lead to neurons receiving nearly identical stimuli across time steps, severely limiting the model's expressive power—particularly in complex tasks like object detection. In this work, we propose the Temporal Dynamics Enhancer (TDE) to strengthen SNNs' capacity for temporal information modeling. TDE consists of two modules: a Spiking Encoder (SE) that generates diverse input stimuli across time steps, and an Attention Gating Module (AGM) that guides the SE generation based on inter-temporal dependencies. Moreover, to eliminate the high-energy multiplication operations introduced by the AGM, we propose a Spike-Driven Attention (SDA) to reduce attention-related energy consumption. Extensive experiments demonstrate that TDE can be seamlessly integrated into existing SNN-based detectors and consistently outperforms state-of-the-art methods, achieving mAP@50-95 scores of 57.7% on the static PASCAL VOC dataset and 47.6% on the neuromorphic EvDET200K dataset. In terms of energy consumption, the SDA consumes only 0.240× the energy of conventional attention modules.
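
As a concrete, deliberately simplified picture of the problem the abstract raises, the sketch below contrasts replicated inputs with per-step stochastic encoding, plus a toy inter-step gate. It is a hedged illustration of the general idea, not the paper's TDE, SE, AGM, or SDA; both function names are invented.

    # Hedged sketch: give a spiking model *different* stimuli per time step
    # instead of T identical copies, and gate each step by its overlap with
    # the previous one. Not the paper's modules; names are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def rate_encode(x, t_steps):
        """Bernoulli rate coding: each step draws an independent spike
        pattern, so stimuli vary across time (unlike direct replication)."""
        return (rng.random((t_steps, *x.shape)) < x).astype(np.float32)

    def temporal_gate(spikes):
        """Toy attention-like gate: scale step t by a squashed function of
        its overlap with step t-1, emphasizing temporally novel input."""
        out = spikes.copy()
        for t in range(1, len(spikes)):
            overlap = float((spikes[t] * spikes[t - 1]).mean())
            gate = 1.0 / (1.0 + np.exp(4.0 * (overlap - 0.5)))  # low overlap -> ~1
            out[t] *= gate
        return out

    frame = rng.random((8, 8))                 # a normalized input frame
    enc = temporal_gate(rate_encode(frame, t_steps=4))
    print(enc.shape, float(enc.mean()))        # (4, 8, 8), per-step diversity

A real spike-driven attention module would avoid the floating-point multiplications used here; the abstract's SDA is motivated precisely by removing that cost.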

IROS Conference 2025 · Conference Paper

Vision-Language Navigation with Continual Learning for Unseen Environments

  • ZhiYuan Li
  • Yanfeng Lu
  • Di Shang
  • Ziqin Tu
  • Hong Qiao

Vision-language navigation (VLN) is a pivotal area within embodied intelligence, where agents must navigate based on natural language instructions. While traditional VLN research has focused on enhancing environmental comprehension and decision-making policies, these methods often reveal substantial performance gaps when agents are deployed in novel environments. This issue primarily arises from a lack of diverse training data, and expanding datasets to encompass a broader range of environments is impractical and costly. To address this challenge, we propose Vision-Language Navigation with Continual Learning (VLNCL), a framework that allows agents to incrementally learn from new environments while preserving previously acquired knowledge. We introduce a novel dual-loop scenario replay method (Dual-SR), inspired by brain memory mechanisms and integrated with VLN agents, which consolidates past experiences and improves generalization to novel tasks. As a result, the agent adapts better to new environments and mitigates catastrophic forgetting. Our experiments demonstrate that VLN agents with Dual-SR effectively resist forgetting and adapt to unfamiliar environments, and that combining VLN with continual learning significantly boosts the performance of otherwise average models, achieving state-of-the-art results.
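
The abstract describes Dual-SR only at a high level. The sketch below shows the generic dual-buffer replay pattern such methods typically build on: an inner loop over the current environment and an outer loop that consolidates into long-term memory on task switches. All class and method names are assumptions for illustration, not the paper's API.

    # Minimal, generic dual-loop experience replay for continual learning.
    # Hypothetical names; the actual Dual-SR method may differ substantially.
    import random

    class DualReplay:
        def __init__(self, short_cap=64, long_cap=512):
            self.short, self.long = [], []      # within-task / cross-task memory
            self.short_cap, self.long_cap = short_cap, long_cap

        def observe(self, episode):
            """Inner loop: keep recent episodes of the current environment."""
            self.short.append(episode)
            if len(self.short) > self.short_cap:
                self.short.pop(0)

        def consolidate(self):
            """Outer loop: on a task switch, fold a sample of recent episodes
            into long-term memory (reservoir-style) and reset the inner loop."""
            for ep in random.sample(self.short, min(8, len(self.short))):
                if len(self.long) < self.long_cap:
                    self.long.append(ep)
                else:
                    self.long[random.randrange(self.long_cap)] = ep
            self.short.clear()

        def batch(self, new_samples, k=4):
            """A training batch mixes new-environment samples with replayed
            old ones, so updates do not overwrite previous knowledge."""
            replay = random.sample(self.long, min(k, len(self.long)))
            return list(new_samples) + replay

    mem = DualReplay()
    for i in range(10):
        mem.observe({"env": "A", "step": i})
    mem.consolidate()                                  # simulate leaving environment A
    print(len(mem.batch([{"env": "B", "step": 0}])))   # 1 new + 4 replayed = 5

Mixing replayed episodes into every batch is the standard lever against catastrophic forgetting that the abstract refers to.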

IROS Conference 2024 · Conference Paper

Spike-based high energy efficiency and accuracy tracker for Robot

  • Jinye Qu
  • Zeyu Gao
  • Yi Li
  • Yanfeng Lu
  • Hong Qiao

Spiking Neural Networks (SNNs) have gained attention for their energy efficiency and strong biological interpretability, although they also face challenges such as prolonged latency and suboptimal tracking accuracy. Recent studies have explored the application of SNNs to object tracking tasks. Dynamic vision sensors (DVS) have become a popular way to implement SNN-based object tracking because their asynchronous, spiking characteristics resemble those of SNNs. However, the high cost of DVS cameras and the lack of object surface texture information hinder the utility and performance of DVS trackers. In contrast, RGB information has inherent advantages, including low acquisition cost and comprehensive representation of object surface texture; however, RGB images are prone to excessive blurring in low-light conditions or fast-motion scenes. To address these challenges, we propose the "Motion Feature Extractor" and the "RGB-DVS Fusion Module". The Motion Feature Extractor can replace the DVS camera at very low cost, and the RGB-DVS Fusion Module deeply fuses the two feature streams to compensate for their respective deficiencies. In addition, we adopt a conversion method to obtain a lossless SNN version of the model. In experiments, our model achieves a 13.6% improvement in expected average overlap (EAO) while using only 1.47% of the energy consumption of SiamRPN on the VOT2016 dataset. We also deployed the model on a robot and conducted tracking experiments, confirming that it operates on the robot losslessly and with satisfactory results.
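
As a rough sketch of the recipe the abstract outlines, the code below derives event-like motion cues from consecutive RGB frames (standing in for a DVS camera) and fuses them with appearance features. The threshold and the simple weighted fusion are assumptions for illustration; the paper's Motion Feature Extractor and RGB-DVS Fusion Module are not specified in this listing.

    # Hedged illustration: pseudo-DVS events from RGB frame differences,
    # fused with appearance features. Not the paper's actual modules.
    import numpy as np

    def motion_events(prev_frame, frame, thresh=0.1):
        """Pseudo-events: sign of brightness change where |delta| exceeds a
        threshold, mimicking ON/OFF DVS polarity from plain RGB input."""
        delta = frame.astype(np.float32) - prev_frame.astype(np.float32)
        return np.sign(delta) * (np.abs(delta) > thresh)

    def fuse(appearance, motion, alpha=0.5):
        """Simple weighted fusion: appearance carries surface texture, while
        motion cues compensate for blur in fast or low-light scenes."""
        return alpha * appearance + (1.0 - alpha) * motion

    rng = np.random.default_rng(1)
    f0, f1 = rng.random((32, 32)), rng.random((32, 32))
    events = motion_events(f0, f1)
    print(fuse(f1, events).shape)   # (32, 32) fused feature map

A learned fusion network would replace the fixed alpha here; the point is only that the motion branch substitutes cheap frame differencing for DVS hardware, which is the cost argument the abstract makes.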