Arrow Research search

Author name cluster

Chunlin Yu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author row

Possible papers

4

JBHI Journal 2026 Journal Article

Rethinking Feature Interactions for Medical Image Segmentation: A Unified Hierarchical Aggregation Framework with Boundary Guidance

  • Chunlin Yu
  • Yinhao Li
  • Jiaxun Li
  • Zheng Zhao
  • Taohong Zhang

Medical image segmentation is a crucial task in medical image analysis and computer vision. Medical images, compared to natural ones, contain more complex semantic information, making feature learning more challenging. Existing encoder-decoder architectures are limited by inadequate cross-scale interaction and insufficient boundary modeling in their feature fusion designs. To address this, we propose a Hierarchical Feature Interaction network with Boundary guidance (HFIBNet), which unifies dynamic cross-level feature fusion and explicit edge supervision within a coarse-to-fine segmentation framework. Specifically, we introduce a Boundary Prediction (BP) module to extract boundary-aware features that guide the fusion process. A Cross-Level Feature Fusion (CLFF) module is designed to promote semantic interaction across adjacent encoder stages, while the Edge Feature Aggregation (EFA) module propagates boundary cues hierarchically to enhance structural consistency. Furthermore, a Partially Parallel Decoder (PPD) generates a coarse global prediction, which is progressively refined by a Global-Local Feature Enrichment (GLFE) module, mimicking the clinical annotation workflow from coarse localization to fine delineation. Extensive experiments on ten public medical segmentation datasets across four distinct tasks demonstrate that HFIBNet consistently outperforms existing state-of-the-art methods. The code is available at https://github.com/ukeLin/HFIBNet.

AAAI Conference 2024 Conference Paper

HybridGait: A Benchmark for Spatial-Temporal Cloth-Changing Gait Recognition with Hybrid Explorations

  • Yilan Dong
  • Chunlin Yu
  • Ruiyang Ha
  • Ye Shi
  • Yuexin Ma
  • Lan Xu
  • Yanwei Fu
  • Jingya Wang

Existing gait recognition benchmarks mostly include minor clothing variations in laboratory environments, but lack persistent changes in appearance over time and space. In this paper, we propose the first in-the-wild benchmark, CCGait, for cloth-changing gait recognition, which incorporates diverse clothing changes, indoor and outdoor scenes, and multi-modal statistics collected over 92 days. To further address the coupling effect of clothing and viewpoint variations, we propose a hybrid approach, HybridGait, that exploits both temporal dynamics and the projected 2D information of 3D human meshes. Specifically, we introduce a Canonical Alignment Spatial-Temporal Transformer (CA-STT) module to encode human joint position-aware features, and fully exploit 3D dense priors via a Silhouette-guided Deformation with 3D-2D Appearance Projection (SilD) strategy. Our contributions are twofold: we provide a challenging benchmark, CCGait, that captures realistic appearance changes over expanded time and space, and we propose a hybrid framework, HybridGait, that outperforms prior works on the CCGait and Gait3D benchmarks. Our project page is available at https://github.com/HCVLab/HybridGait.

NeurIPS Conference 2023 Conference Paper

Contextually Affinitive Neighborhood Refinery for Deep Clustering

  • Chunlin Yu
  • Ye Shi
  • Jingya Wang

Previous endeavors in self-supervised learning have informed the study of deep clustering from an instance discrimination perspective. Built upon this foundation, recent studies further highlight the importance of grouping semantically similar instances. One effective way to achieve this is by preserving semantic structure through neighborhood consistency. However, the samples in a local neighborhood may be of limited value due to their close proximity to each other, and thus may not provide substantial and diverse supervision signals. Inspired by the versatile re-ranking methods used in image retrieval, we propose an efficient online re-ranking process to mine more informative neighbors in a Contextually Affinitive (ConAff) Neighborhood, and then encourage cross-view neighborhood consistency. To further mitigate the intrinsic neighborhood noise near cluster boundaries, we propose a progressively relaxed boundary filtering strategy to circumvent the issues caused by noisy neighbors. Our method can be easily integrated into generic self-supervised frameworks and outperforms state-of-the-art methods on several popular benchmarks.

AAAI Conference 2023 Conference Paper

Lifelong Person Re-identification via Knowledge Refreshing and Consolidation

  • Chunlin Yu
  • Ye Shi
  • Zimo Liu
  • Shenghua Gao
  • Jingya Wang

Lifelong person re-identification (LReID) is in significant demand for real-world deployment, as large amounts of ReID data are captured from diverse locations over time and inherently cannot be accessed all at once. A key challenge for LReID is how to incrementally preserve old knowledge while gradually adding new capabilities to the system. Unlike most existing LReID methods, which mainly focus on mitigating catastrophic forgetting, we target a more challenging problem: not only reducing forgetting on old tasks but also improving model performance on both new and old tasks during the lifelong learning process. Inspired by the biological process of human cognition, in which the somatosensory neocortex and the hippocampus work together in memory consolidation, we formulate a model called Knowledge Refreshing and Consolidation (KRC) that achieves both positive forward and backward transfer. More specifically, a knowledge refreshing scheme is incorporated with the knowledge rehearsal mechanism to enable bi-directional knowledge transfer by introducing a dynamic memory model and an adaptive working model. Moreover, a knowledge consolidation scheme operating on the dual space further improves model stability over the long term. Extensive evaluations show KRC's superiority over state-of-the-art LReID methods on challenging pedestrian benchmarks. Code is available at https://github.com/cly234/LReID-KRKC.