Arrow Research search

Author name cluster

Linhao Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author row

Possible papers

4

AAAI Conference 2026 Conference Paper

Topology-aware Knowledge Preservation for Class-Incremental Learning

  • Han Zang
  • Yongfeng Dong
  • Linhao Li
  • Liang Yang
  • Yu Wang

Class-Incremental Learning (CIL) aims to enable models to continually learn new classes while retaining previously learned knowledge. The principal challenge in CIL is catastrophic forgetting, which prior approaches typically address by distilling knowledge from the previous model. However, such distillation is often limited to pairwise alignment and fails to preserve the underlying global manifold structure of the feature space, ultimately resulting in semantic drift over time. To capture multi-scale structural patterns in the feature space, we propose a topology-aware distillation framework that leverages persistent homology. Specifically, by enforcing topological alignment across incremental stages, our method ensures structure-consistent knowledge transfer and robust preservation of old classes. Furthermore, we devise a dual-branch architecture with an inverse sampling and dynamic reweighting mechanism that addresses the inherent data imbalance in standard replay-based frameworks. These innovations coalesce into TaKP (Topology-aware Knowledge Preservation), a unified framework designed to enhance knowledge preservation in CIL. Extensive experiments demonstrate that TaKP achieves state-of-the-art performance on multiple benchmarks, significantly improving old-class preservation and average accuracy.
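A minimal sketch of the topological-alignment idea the abstract describes. It uses the fact that the 0-dimensional persistence "death times" of a Vietoris–Rips filtration are exactly the edge lengths of a minimum spanning tree over the pairwise-distance graph; the paper's actual TaKP loss and its multi-scale details are not specified here, and the function names are illustrative:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def h0_persistence(features):
    """0-dimensional persistence death times of a point cloud.

    For a Vietoris-Rips filtration, the H0 barcode death times are the
    edge lengths of a minimum spanning tree of the distance graph.
    """
    dist = squareform(pdist(features))
    mst = minimum_spanning_tree(dist).toarray()
    deaths = mst[mst > 0]          # n points -> n-1 MST edges
    return np.sort(deaths)

def topo_alignment_loss(old_feats, new_feats):
    """L1 distance between sorted H0 barcodes of old/new embeddings."""
    a, b = h0_persistence(old_feats), h0_persistence(new_feats)
    n = min(len(a), len(b))
    return float(np.abs(a[:n] - b[:n]).mean())

rng = np.random.default_rng(0)
old = rng.normal(size=(32, 8))
loss_same = topo_alignment_loss(old, old)        # identical clouds -> 0
loss_diff = topo_alignment_loss(old, old * 3.0)  # scaled cloud -> > 0
```

Adding such a loss on top of a standard pairwise distillation term is one way to penalize changes to the connectivity structure of the old-class feature space, rather than only per-sample feature drift.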

AAAI Conference 2025 Conference Paper

Adaptive Decision Boundary for Few-Shot Class-Incremental Learning

  • Linhao Li
  • Yongzhang Tan
  • Siyuan Yang
  • Hao Cheng
  • Yongfeng Dong
  • Liang Yang

Few-Shot Class-Incremental Learning (FSCIL) aims to continuously learn new classes from a limited set of training samples without forgetting knowledge of previously learned classes. Conventional FSCIL methods typically build a robust feature extractor during the base training session with abundant training samples and subsequently freeze this extractor, only fine-tuning the classifier in subsequent incremental phases. However, current strategies primarily focus on preventing catastrophic forgetting, considering only the relationship between novel and base classes, without paying attention to the specific decision spaces of each class. To address this challenge, we propose a plug-and-play Adaptive Decision Boundary Strategy (ADBS), which is compatible with most FSCIL methods. Specifically, we assign a specific decision boundary to each class and adaptively adjust these boundaries during training to optimally refine the decision spaces for the classes in each session. Furthermore, to amplify the distinctiveness between classes, we employ a novel inter-class constraint loss that optimizes the decision boundaries and prototypes for each class. Extensive experiments on three benchmarks, namely CIFAR100, miniImageNet, and CUB200, demonstrate that incorporating our ADBS method with existing FSCIL techniques significantly improves performance, achieving overall state-of-the-art results.
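A toy sketch of the per-class decision-boundary idea from the abstract: each class gets a prototype plus its own radius, classification uses the signed distance to each class ball, and an inter-class hinge keeps balls from overlapping. The exact ADBS losses and update rules are in the paper; everything below is illustrative:

```python
import numpy as np

def adb_predict(x, prototypes, radii):
    """Assign x to the class whose boundary it violates least:
    score_c = ||x - p_c|| - r_c (smaller = deeper inside class c's ball)."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists - radii))

def inter_class_margin_loss(prototypes, radii, margin=0.1):
    """Hinge penalty whenever two class balls come within `margin`
    of overlapping (a stand-in for the paper's inter-class constraint)."""
    loss = 0.0
    k = len(prototypes)
    for i in range(k):
        for j in range(i + 1, k):
            gap = (np.linalg.norm(prototypes[i] - prototypes[j])
                   - (radii[i] + radii[j]))
            loss += max(0.0, margin - gap)
    return loss

protos = np.array([[0.0, 0.0], [4.0, 0.0]])
radii = np.array([1.0, 2.0])
# The point (2.5, 0) is nearer to prototype 1's boundary than prototype 0's,
# so the adaptive radii flip the plain nearest-prototype decision.
pred = adb_predict(np.array([2.5, 0.0]), protos, radii)
```

With equal radii this reduces to nearest-prototype classification; per-class radii are what let each session carve out its own decision space.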

JBHI Journal 2023 Journal Article

Meta-Probability Weighting for Improving Reliability of DNNs to Label Noise

  • Zhen Wang
  • Shuo Jin
  • Linhao Li
  • Yongfeng Dong
  • Qinghua Hu

Training noise-robust deep neural networks (DNNs) in label-noise scenarios is a crucial task. In this paper, we first demonstrate that DNNs trained with label noise overfit the noisy labels because they are overconfident in their own learning capacity. More significantly, however, they also potentially under-learn the samples with clean labels. DNNs should essentially pay more attention to the clean samples rather than the noisy ones. Inspired by the sample-weighting strategy, we propose a meta-probability weighting (MPW) algorithm which re-weights the output probabilities of DNNs to prevent overfitting to label noise and to alleviate under-learning on clean samples. MPW conducts an approximate optimization to adaptively learn the probability weights from data under the supervision of a small clean dataset, and achieves iterative optimization between the probability weights and the network parameters via a meta-learning paradigm. Ablation studies substantiate the effectiveness of MPW in preventing deep neural networks from overfitting to label noise and in improving the learning capacity on clean samples. Furthermore, MPW achieves competitive performance with other state-of-the-art methods on both synthetic and real-world label noise.
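A minimal sketch of the probability-reweighting step the abstract describes: class probabilities are rescaled by learned weights and renormalized before the loss is taken. In MPW those weights come from a bilevel meta-optimization against a small clean set; here they are fixed constants purely for illustration:

```python
import numpy as np

def reweighted_nll(probs, label, weights):
    """Negative log-likelihood after reweighting class probabilities.

    probs:   model softmax output, shape (C,)
    weights: per-class probability weights (meta-learned on a small
             clean set in MPW; fixed here for illustration)
    """
    w = probs * weights
    w = w / w.sum()               # renormalize to a distribution
    return -np.log(w[label])

probs = np.array([0.7, 0.2, 0.1])
uniform = np.ones(3)
base = reweighted_nll(probs, 0, uniform)           # equals the plain NLL
damped = reweighted_nll(probs, 0, np.array([0.5, 1.0, 1.0]))
```

Down-weighting the probability mass behind a (possibly noisy) label raises its loss contribution more slowly near full confidence, which is the sense in which reweighting curbs the network's overconfidence in memorized noisy labels.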

AAAI Conference 2021 Conference Paper

Dynamic Anchor Learning for Arbitrary-Oriented Object Detection

  • Qi Ming
  • Zhiqiang Zhou
  • Lingjuan Miao
  • Hongwei Zhang
  • Linhao Li

Arbitrary-oriented objects widely appear in natural scenes, aerial photographs, remote sensing images, etc., and thus arbitrary-oriented object detection has received considerable attention. Many current rotation detectors use plenty of anchors with different orientations to achieve spatial alignment with ground-truth boxes. Intersection-over-Union (IoU) is then applied to sample the positive and negative candidates for training. However, we observe that the selected positive anchors cannot always ensure accurate detections after regression, while some negative samples can achieve accurate localization. This indicates that assessing anchor quality through IoU is not appropriate, and it further leads to inconsistency between classification confidence and localization accuracy. In this paper, we propose a dynamic anchor learning (DAL) method, which utilizes the newly defined matching degree to comprehensively evaluate the localization potential of the anchors and carries out a more efficient label assignment process. In this way, the detector can dynamically select high-quality anchors to achieve accurate object detection, and the divergence between classification and regression is alleviated. With the newly introduced DAL, we can achieve superior detection performance for arbitrary-oriented objects with only a few horizontal preset anchors. Experimental results on three remote sensing datasets (HRSC2016, DOTA, and UCAS-AOD) as well as the scene text dataset ICDAR 2015 show that our method achieves substantial improvement compared with the baseline model. Besides, our approach is also universal for object detection using horizontal bounding boxes. The code and models are available at https://github.com/ming71/DAL.
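A toy sketch of the label-assignment idea from the abstract: rank anchors not by input IoU alone but by a score that also accounts for the IoU achieved after regression. The axis-aligned IoU and the particular blend below are simplifications (DAL handles rotated boxes, and its exact matching-degree formula is defined in the paper):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2); a simplification,
    since the paper works with rotated boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def matching_degree(sa, fa, alpha=0.5, gamma=5.0):
    """Toy matching degree: blend the input IoU (sa, before regression)
    and output IoU (fa, after regression), penalized by the regression
    uncertainty |sa - fa|. Illustrative form only."""
    u = abs(sa - fa)
    return alpha * sa + (1 - alpha) * fa - u ** gamma

# An anchor with modest prior alignment but strong post-regression IoU
# outranks one that starts well aligned but regresses poorly, whereas
# plain input-IoU assignment would pick the latter.
score_dynamic = matching_degree(sa=0.4, fa=0.8)
score_static = matching_degree(sa=0.6, fa=0.3)
```

This is the gist of why a few horizontal preset anchors can suffice: anchors with high localization potential are promoted to positives even when their initial IoU with the ground truth is mediocre.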