Arrow Research search

Author name cluster

Guangjun Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers


AAAI Conference 2026 Conference Paper

Class Incremental Medical Image Segmentation via Prototype-Guided Calibration and Dual-Aligned Distillation

  • Shengqian Zhu
  • Chengrong Yu
  • Qiang Wang
  • Ying Song
  • Guangjun Li
  • Jiafei Wu
  • Xiaogang Xu
  • Zhang Yi

Class incremental medical image segmentation (CIMIS) aims to preserve knowledge of previously learned classes while learning new ones without relying on old-class annotations. However, existing methods either 1) adopt one-size-fits-all strategies that treat all spatial regions and feature channels equally, which may hinder the preservation of accurate old knowledge, or 2) focus solely on aligning local prototypes with global ones for old classes while overlooking their local representations in new data, leading to knowledge degradation. To mitigate these issues, we propose Prototype-Guided Calibration Distillation (PGCD) and Dual-Aligned Prototype Distillation (DAPD) for CIMIS in this paper. Specifically, PGCD exploits prototype-to-feature similarity to calibrate class-specific distillation intensity in different spatial regions, effectively reinforcing reliable old knowledge and suppressing misleading cues from old classes. Complementarily, DAPD aligns the local prototypes of old classes extracted from the current model with both global historical prototypes and local prototypes, further enhancing segmentation performance on old categories. Comprehensive evaluations on two widely used multi-organ segmentation benchmarks demonstrate that our method outperforms current state-of-the-art methods, highlighting its robustness and generalization capabilities.
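The core idea behind PGCD — weighting distillation by prototype-to-feature similarity — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the cosine-similarity choice, and the `(sim + 1) / 2` rescaling are assumptions for illustration only.

```python
import numpy as np

def prototype_calibrated_weights(features, prototypes, eps=1e-8):
    """Per-pixel distillation weights from prototype-to-feature similarity.

    features:   (C, H, W) feature map from the old model
    prototypes: (K, C) one prototype vector per old class
    Returns:    (K, H, W) weights in [0, 1] -- higher where a region
                resembles an old-class prototype, so distillation there is
                reinforced; dissimilar (potentially misleading) regions
                receive lower weight.
    """
    c, h, w = features.shape
    f = features.reshape(c, -1)                                 # (C, H*W)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + eps)    # unit columns
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + eps)
    sim = p @ f                                                 # (K, H*W) cosine similarity
    weights = (sim + 1.0) / 2.0                                 # map [-1, 1] -> [0, 1]
    return weights.reshape(-1, h, w)
```

Such a weight map would then scale a standard distillation loss per class and per pixel, rather than applying one uniform intensity everywhere.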

JBHI Journal 2026 Journal Article

Rethinking Propagation Methods for Interactive Medical Image Segmentation

  • Shengqian Zhu
  • Yuncheng Shen
  • Yingyong Yin
  • Ying Song
  • Zhang Yi
  • Guangjun Li
  • Junjie Hu

Propagation-based methods have drawn increasing research attention in interactive medical image segmentation. However, existing propagation-based methods face two significant challenges: 1) Due to the continuous nature of anatomical structures within the organs and tumors throughout the volume, over-propagation is likely to occur as the propagation process reaches the end of structures, leading to a degradation in segmentation performance. 2) During the multi-round refinement process, selecting the worst-segmented slice for refinement tends to hinder the optimization of segmentation results. To overcome these challenges, we propose the Discrepancy Aware Network (DANet), which includes a Discrepancy Learning Module (DLM) and employs a confidence loss to achieve accurate segmentation. Specifically, DLM captures the temporal-contextual discrepancy between previous and current slices, enabling the model to perceive the variations of the target. Furthermore, the confidence loss is responsible for regularizing the over-confident segmentation at the image level by estimating the target foreground. Additionally, we design a straightforward slice selection strategy to optimize the refinement process. Extensive experimental results on five public medical datasets demonstrate significant improvements over state-of-the-art methods (e.g., a +1.07% improvement on the MSD-Spleen dataset).
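The paper's image-level confidence loss is not specified in the abstract; one plausible reading — penalizing predicted foreground mass that exceeds an estimate of the true foreground — can be sketched as follows. The function name and the hinge-squared form are assumptions, not the authors' formulation.

```python
import numpy as np

def confidence_regularizer(probs, est_foreground_frac):
    """Image-level penalty on over-confident foreground predictions.

    probs:               (H, W) predicted foreground probabilities
    est_foreground_frac: scalar estimate of the true foreground fraction
    Returns a scalar penalty that grows when the mean predicted foreground
    exceeds the estimate -- discouraging over-propagation past the end of
    an anatomical structure.
    """
    pred_frac = float(probs.mean())
    excess = max(0.0, pred_frac - est_foreground_frac)
    return excess ** 2
```

In training, such a term would be added to the usual segmentation loss so that slices near the end of a structure, where the true foreground shrinks, are not over-segmented.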

JBHI Journal 2022 Journal Article

Deep Neural Network With Structural Similarity Difference and Orientation-Based Loss for Position Error Classification in the Radiotherapy of Graves’ Ophthalmopathy Patients

  • Wenjie Liu
  • Lei Zhang
  • Guyu Dai
  • Xiangbin Zhang
  • Guangjun Li
  • Zhang Yi

Identifying position errors for Graves’ ophthalmopathy (GO) patients using electronic portal imaging device (EPID) transmission fluence maps is helpful in monitoring treatment. However, most of the existing models only extract features from dose difference maps computed from EPID images, which do not fully characterize all information of the positional errors. In addition, the position error has a three-dimensional spatial nature, which has never been explored in previous work. To address the above problems, a deep neural network (DNN) model with structural similarity difference and orientation-based loss is proposed in this paper, which consists of a feature extraction network and a feature enhancement network. To capture more information, three types of Structural SIMilarity (SSIM) sub-index maps are computed to enhance the luminance, contrast, and structural features of EPID images, respectively. These maps and the dose difference maps are fed into different networks to extract radiomic features. To acquire spatial features of the position errors, an orientation-based loss function is proposed for optimal training. It makes the data distribution more consistent with the realistic 3D space by integrating the error deviations of the predicted values in the left-right, superior-inferior, anterior-posterior directions. Experimental results on a constructed dataset demonstrate the effectiveness of the proposed model, compared with other related models and existing state-of-the-art methods.