Arrow Research search

Author name cluster

Huabao Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers (3)

JBHI Journal 2025 Journal Article

A Reliable Multi-Stage System for Tooth Instance Segmentation and Numbering in Panoramic Radiographs

  • Yani Zhang
  • Yuzhou Yu
  • Qiankun Li
  • Huabao Chen
  • Yuci Liang
  • Junxin Chen
  • Jiayue Yin

Accurate tooth instance segmentation and identification in panoramic radiographs is a fundamental prerequisite for automated dental diagnosis and treatment planning. Existing methods often rely on static segmentation pipelines that overlook anatomical symmetry and fail to address structural anomalies such as supernumerary or missing teeth. To this end, we propose a reliable multi-stage framework for clinically applicable tooth instance segmentation and FDI-compliant numbering. The system comprises three stages: a supernumerary tooth classifier to guide adaptive routing, a core instance segmentor for tooth-level delineation and numbering, and a missing tooth detector to ensure index completeness via post-hoc correction. Central to our design is the Symmetry-Aware Dual-branch Pyramid Network (SADP-Net), which explicitly models bilateral structures and scale variations through a Symmetry-Aware Module (SymAM) and a Dual-Branch Pyramid (DBP) architecture. Extensive experiments on three datasets, including Tufts Dental, O2PR, and a multi-pathology cohort, demonstrate superior performance compared to state-of-the-art baselines. Ablation studies further validate the contributions of each proposed component in enhancing boundary localization, numbering consistency, and robustness to anatomical variability. Our framework provides a scalable and interpretable solution for real-world dental imaging systems. Code: https://github.com/qklee-lz/SADP-Net.
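The three-stage routing the abstract describes (classifier → segmentor → post-hoc completeness check) can be sketched as a simple dispatch pipeline. This is an illustrative stub only: every function name, threshold, and return shape below is a hypothetical stand-in, not the paper's released implementation.

```python
import numpy as np

def classify_supernumerary(image):
    # Stage 1 (hypothetical stub): binary classifier that guides adaptive
    # routing. Here we just flag images whose mean intensity is high.
    return bool(image.mean() > 0.5)

def segment_and_number(image, supernumerary):
    # Stage 2 (hypothetical stub): instance segmentation with FDI-compliant
    # numbering. Returns a set of predicted FDI indices; a real segmentor
    # (e.g. SADP-Net) would return masks plus labels.
    base = {11, 12, 13, 21, 22}
    return base | ({99} if supernumerary else set())

def correct_missing(indices, expected):
    # Stage 3 (hypothetical stub): missing-tooth detection as a post-hoc
    # completeness check against the expected FDI index set.
    return sorted(expected - indices)

def run_pipeline(image, expected_fdi):
    flag = classify_supernumerary(image)
    indices = segment_and_number(image, flag)
    missing = correct_missing(indices, expected_fdi)
    return {"supernumerary": flag, "indices": indices, "missing": missing}
```

The point of the sketch is the control flow: the stage-1 decision conditions stage-2 behavior, and stage 3 never alters masks, only repairs the index set.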

NeurIPS Conference 2025 Conference Paper

Unleashing Foundation Vision Models: Adaptive Transfer for Diverse Data-Limited Scientific Domains

  • Qiankun Li
  • Feng He
  • Huabao Chen
  • Xin Ning
  • Kun Wang
  • Zengfu Wang

In the big data era, the computer vision field benefits from large-scale datasets such as LAION-2B, LAION-400M, ImageNet-21K, and Kinetics, on which popular models like the ViT and ConvNeXt series have been pre-trained, acquiring substantial knowledge. However, numerous downstream tasks in specialized and data-limited scientific domains continue to pose significant challenges. In this paper, we propose a novel Cluster Attention Adapter (CLAdapter), which refines and adapts the rich representations learned from large-scale data to various data-limited downstream tasks. Specifically, CLAdapter introduces attention mechanisms and cluster centers to personalize the enhancement of transformed features through distribution correlation and transformation matrices. This enables models fine-tuned with CLAdapter to learn distinct representations tailored to different feature sets, facilitating the models' adaptation from rich pre-trained features to various downstream scenarios effectively. In addition, CLAdapter's unified interface design allows for seamless integration with multiple model architectures, including CNNs and Transformers, in both 2D and 3D contexts. Through extensive experiments on 10 datasets spanning domains such as generic, multimedia, biological, medical, industrial, agricultural, environmental, geographical, materials science, out-of-distribution (OOD), and 3D analysis, CLAdapter achieves state-of-the-art performance across diverse data-limited scientific domains, demonstrating its effectiveness in unleashing the potential of foundation vision models via adaptive transfer. Code is available at https://github.com/qklee-lz/CLAdapter.
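A cluster-plus-attention adapter of the kind the abstract gestures at can be sketched in a few lines of NumPy: features are softly assigned to cluster centers via an attention-style softmax over distances, and the centers are mixed back in as a residual enhancement. This is a minimal illustration under assumed shapes and update rules; the actual CLAdapter design lives in the linked repository.

```python
import numpy as np

def cluster_attention_adapter(features, centers, tau=1.0):
    """Re-weight features by their soft assignment to cluster centers.

    features: (n, d) pre-trained representations
    centers:  (k, d) cluster centers (learnable in a real adapter)
    tau:      temperature for the attention softmax
    """
    # Negative squared distances to each center act as attention logits.
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    attn = np.exp(-d2 / tau)
    attn /= attn.sum(axis=1, keepdims=True)   # (n, k) soft assignment
    # Mix cluster centers back into each feature (residual enhancement),
    # so features near the same center are nudged toward a shared prototype.
    enhanced = features + attn @ centers
    return enhanced, attn
```

The design intuition is that the softmax assignment personalizes the enhancement per feature, while the shared centers encode dataset-level structure that a plain linear adapter would not capture.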

JBHI Journal 2024 Journal Article

Embracing Large Natural Data: Enhancing Medical Image Analysis via Cross-Domain Fine-Tuning

  • Qiankun Li
  • Xiaolong Huang
  • Bo Fang
  • Huabao Chen
  • Siyuan Ding
  • Xu Liu

With the rapid advancement of big data and computer vision, many large-scale natural visual datasets have been proposed, such as ImageNet-21K, LAION-400M, and LAION-2B. These large-scale datasets significantly improve the robustness and accuracy of models in the natural vision domain. However, the field of medical images continues to face limitations due to relatively small-scale datasets. In this article, we propose a novel method to enhance medical image analysis across domains by leveraging pre-trained models on large natural datasets. Specifically, a Cross-Domain Transfer Module (CDTM) is proposed to transfer natural vision domain features to the medical image domain, facilitating efficient fine-tuning of models pre-trained on large datasets. In addition, we design a Staged Fine-Tuning (SFT) strategy in conjunction with CDTM to further improve the model performance. Experimental results demonstrate that our method achieves state-of-the-art performance on multiple medical image datasets through efficient fine-tuning of models pre-trained on large natural datasets.
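The staged fine-tuning idea (train the transfer module first, then widen what is trainable) can be sketched as a parameter-group selector. The grouping, stage count, and names below are hypothetical; the paper's actual SFT schedule may differ.

```python
def trainable_params(stage, params):
    """Select which parameter groups to update in each fine-tuning stage.

    params: dict mapping group name -> list of parameter names.
    Hypothetical schedule: stage 1 trains only the cross-domain transfer
    module (CDTM) and the task head on top of a frozen backbone; stage 2
    additionally unfreezes the pre-trained backbone for joint refinement.
    """
    groups = {
        1: ["cdtm", "head"],
        2: ["cdtm", "head", "backbone"],
    }[stage]
    return [p for g in groups for p in params[g]]
```

In a framework like PyTorch the same effect is usually achieved by toggling `requires_grad` on each group before building the optimizer for that stage.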