Arrow Research search

Author name cluster

Yaping Wu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers

5

JBHI Journal 2026 Journal Article

DGRTA: Cross-Modality Unsupervised Domain Adaptation for Intracranial Vessel Segmentation via Dual-Gated Refinement and Topology-Aware Weighting

  • Yijia Zheng
  • Jiahui Lv
  • Yaping Wu
  • Xinsheng Mao
  • Chao Zheng
  • Meiyun Wang
  • Hua Guo

TOF-MRA intracranial vessel segmentation is critical in clinical practice but challenged by limited annotations and significant cross-modality domain shifts. To address these issues, this study proposes DGRTA, an unsupervised domain adaptation framework that integrates cross-modality pseudo-label generation, a dual-gated pseudo-label refinement strategy (DGR), and a topology-aware weighting mechanism (TA). Initially, rigid and non-rigid registration are used to transfer CTA predictions to TOF-MRA to generate initial pseudo-labels. DGR then refines these labels using prediction probabilities and image intensity, enhancing sensitivity and specificity, while TA leverages persistence diagrams (PDs) to quantify topological discrepancies and dynamically adjust loss weights. Experiments on 185 paired CTA/TOF-MRA cases demonstrated that DGRTA consistently improved performance across four backbone architectures (UNet, Attention UNet, UNETR, Swin UNETR). The Attention UNet with DGRTA achieved the best results, with a Dice of 0.810, clDice of 0.800, and an AHD of 0.353 mm on the validation set, significantly outperforming the baseline model (p < 0.001). DGRTA offers a feasible solution that reduces reliance on extensive manual annotations, underscoring the potential of unsupervised cross-modality segmentation in various vascular imaging applications.
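To make the dual-gated idea concrete, here is a minimal, hypothetical sketch of such a refinement step in PyTorch: a pseudo-label voxel transferred from the registered CTA prediction is kept only when both a prediction-probability gate and an image-intensity gate agree. The function name and the thresholds are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a dual-gated pseudo-label refinement step (not the authors' code).
import torch

def dual_gated_refine(pseudo_label, prob_map, intensity, p_thresh=0.5, i_thresh=0.3):
    """Keep a pseudo-label voxel only if BOTH gates agree.

    pseudo_label: binary mask transferred from registered CTA predictions
    prob_map:     TOF-MRA model probability for the vessel class
    intensity:    TOF-MRA image normalized to [0, 1] (vessels appear bright in TOF-MRA)
    p_thresh, i_thresh: illustrative thresholds, not the paper's values
    """
    prob_gate = prob_map >= p_thresh        # confidence gate on the model prediction
    intensity_gate = intensity >= i_thresh  # image-intensity gate on the target modality
    return pseudo_label.bool() & prob_gate & intensity_gate
```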

JBHI Journal 2025 Journal Article

Automatic Brain Segmentation for PET/MR Dual-Modal Images Through a Cross-Fusion Mechanism

  • Hongyan Tang
  • Zhenxing Huang
  • Wenbo Li
  • Yaping Wu
  • Jianmin Yuan
  • Yang Yang
  • Yan Zhang
  • Jing Qin

The precise segmentation of different brain regions and tissues is usually a prerequisite for the detection and diagnosis of various neurological disorders in neuroscience. Considering the abundance of functional and structural dual-modality information in positron emission tomography/magnetic resonance (PET/MR) images, we propose a novel 3D whole-brain segmentation network with a cross-fusion mechanism introduced to obtain 45 brain regions. Specifically, the network processes PET and MR images simultaneously, employing UX-Net and a cross-fusion block for feature extraction and fusion in the encoder. We test our method by comparing it with other deep learning-based methods, including 3DUXNET, SwinUNETR, UNETR, nnFormer, UNet3D, NestedUNet, ResUNet, and VNet. The experimental results demonstrate that the proposed method achieves better segmentation performance in terms of both visual and quantitative evaluation metrics and achieves more precise segmentation in three views while preserving fine details. In particular, the proposed method achieves superior quantitative results, with a Dice coefficient of 85.73% $\pm$ 0.01%, a Jaccard index of 76.68% $\pm$ 0.02%, a sensitivity of 85.00% $\pm$ 0.01%, a precision of 83.26% $\pm$ 0.03%, and a Hausdorff distance (HD) of 4.4885 $\pm$ 14.85%. Moreover, the distribution and correlation of the standardized uptake value (SUV) in the volume of interest (VOI) are also evaluated (PCC > 0.9), indicating consistency with the ground truth and the superiority of the proposed method. In future work, we will utilize our whole-brain segmentation method in clinical practice to assist doctors in accurately diagnosing and treating brain diseases.
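As a rough illustration of what a cross-fusion block between PET and MR encoder features could look like, the sketch below applies bidirectional cross-attention over flattened 3D feature maps and merges the two attended streams. The class name, head count, and merge layer are assumptions for illustration; the paper's actual block may differ.

```python
# Illustrative cross-fusion block for PET/MR encoder features (a sketch, not the paper's design).
import torch
import torch.nn as nn

class CrossFusionBlock(nn.Module):
    def __init__(self, channels, heads=4):
        super().__init__()
        self.pet_from_mr = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.mr_from_pet = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.merge = nn.Linear(2 * channels, channels)

    def forward(self, pet_feat, mr_feat):
        # pet_feat, mr_feat: (B, C, D, H, W) encoder features of the two modalities
        b, c, d, h, w = pet_feat.shape
        pet = pet_feat.flatten(2).transpose(1, 2)    # (B, N, C), N = D*H*W
        mr = mr_feat.flatten(2).transpose(1, 2)
        pet_attn, _ = self.pet_from_mr(pet, mr, mr)  # PET queries attend to MR keys/values
        mr_attn, _ = self.mr_from_pet(mr, pet, pet)  # MR queries attend to PET keys/values
        fused = self.merge(torch.cat([pet_attn, mr_attn], dim=-1))
        return fused.transpose(1, 2).reshape(b, c, d, h, w)
```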

JBHI Journal 2024 Journal Article

Accurate Whole-Brain Image Enhancement for Low-Dose Integrated PET/MR Imaging Through Spatial Brain Transformation

  • Zhenxing Huang
  • Wenbo Li
  • Yaping Wu
  • Lin Yang
  • Yun Dong
  • Yongfeng Yang
  • Hairong Zheng
  • Dong Liang

Positron emission tomography/magnetic resonance imaging (PET/MRI) systems can provide precise anatomical and functional information with exceptional sensitivity and accuracy for neurological disorder detection. Nevertheless, the radiation exposure risks and economic costs of radiopharmaceuticals may pose significant burdens on patients. To mitigate image quality degradation during low-dose PET imaging, we proposed a novel 3D network equipped with a spatial brain transform (SBF) module for low-dose whole-brain PET and MR images to synthesize high-quality PET images. The FreeSurfer toolkit was applied to derive the spatial brain anatomical alignment information, which was then fused with low-dose PET and MR features through the SBF module. Moreover, several deep learning methods were employed as comparison measures to evaluate the model performance, with the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and Pearson correlation coefficient (PCC) serving as quantitative metrics. Both the visual and quantitative results illustrated the effectiveness of our approach. The obtained PSNR and SSIM were $41.96 \pm 4.91$ dB (p < 0.01) and $0.9654 \pm 0.0215$ (p < 0.01), which achieved a 19% and 20% improvement, respectively, compared to the original low-dose brain PET images. The volume of interest (VOI) analysis of brain regions such as the left thalamus (PCC = 0.959) also showed that the proposed method could achieve a more accurate standardized uptake value (SUV) distribution while preserving the details of brain structures. In future work, we hope to apply our method to other multimodal systems, such as PET/CT, to assist clinical brain disease diagnosis and treatment.
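A minimal sketch of how anatomical priors might be fused with low-dose PET and MR features is shown below: a FreeSurfer-derived label or probability volume is resampled to the feature grid and merged with both modalities by a 3D convolution. This is an assumed stand-in for the SBF-style fusion described above, not the authors' implementation; the class name and channel layout are illustrative.

```python
# Hypothetical sketch: fusing FreeSurfer-derived anatomy with low-dose PET/MR features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialBrainFusion(nn.Module):
    """Toy stand-in for an SBF-style module: resample an anatomical prior volume
    to the feature grid and fuse it with PET and MR features by convolution."""
    def __init__(self, feat_channels, anat_channels):
        super().__init__()
        self.fuse = nn.Conv3d(2 * feat_channels + anat_channels, feat_channels,
                              kernel_size=3, padding=1)

    def forward(self, pet_feat, mr_feat, anatomy):
        # anatomy: (B, anat_channels, D0, H0, W0) FreeSurfer-derived prior volume
        anatomy = F.interpolate(anatomy, size=pet_feat.shape[2:],
                                mode="trilinear", align_corners=False)
        return self.fuse(torch.cat([pet_feat, mr_feat, anatomy], dim=1))
```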

JBHI Journal 2023 Journal Article

A Two-Branch Neural Network for Short-Axis PET Image Quality Enhancement

  • Minghan Fu
  • Meiyun Wang
  • Yaping Wu
  • Na Zhang
  • Yongfeng Yang
  • Haining Wang
  • Yun Zhou
  • Yue Shang

The axial field of view (FOV) is a key factor that affects the quality of PET images. Due to hardware FOV restrictions, conventional short-axis PET scanners with FOVs of 20 to 35 cm can acquire only low-quality PET (LQ-PET) images in fast scanning times (2–3 minutes). To overcome hardware restrictions and improve PET image quality for better clinical diagnoses, several deep learning-based algorithms have been proposed. However, these approaches use simple convolution layers with residual learning and local attention, which insufficiently extract and fuse long-range contextual information. To this end, we propose a novel two-branch network architecture with Swin Transformer units and graph convolution operations, namely SW-GCN. The proposed SW-GCN provides additional spatial- and channel-wise flexibility to handle different types of input information flow. Specifically, considering the high computational cost of computing self-attention weights over full-size PET images, our spatial adaptive branch computes self-attention within each local partition window and introduces global information interactions between nonoverlapping windows through shifting operations. In addition, a convolutional network structure treats the information in each channel equally during feature extraction; in our channel adaptive branch, we therefore use a Watts–Strogatz topology to connect each feature map to only its most relevant features in each graph convolutional layer, substantially reducing information redundancy. Moreover, ensemble learning is adopted in SW-GCN to map the distinct features from the two branches to the enhanced PET images. We carried out extensive experiments on three single-bed position scans from 386 patients. The test results demonstrate that our proposed SW-GCN approach outperforms state-of-the-art methods in both quantitative and qualitative evaluations.
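The channel-adaptive idea can be sketched as a graph convolution over channels whose connectivity comes from a Watts–Strogatz small-world graph, so each feature map exchanges information with only a few neighbours rather than all channels. The class name, row normalization, and parameter values below are illustrative assumptions, not the SW-GCN code.

```python
# Sketch: a Watts-Strogatz channel graph so each feature map mixes only with a few
# neighbouring channels (illustrative; not the SW-GCN implementation).
import networkx as nx
import torch
import torch.nn as nn

class ChannelGraphConv(nn.Module):
    def __init__(self, channels, k=4, rewire_p=0.1, seed=0):
        super().__init__()
        g = nx.watts_strogatz_graph(channels, k, rewire_p, seed=seed)
        adj = torch.from_numpy(nx.to_numpy_array(g)).float() + torch.eye(channels)
        deg = adj.sum(dim=1, keepdim=True)
        self.register_buffer("adj_norm", adj / deg)          # row-normalized adjacency
        self.weight = nn.Linear(channels, channels, bias=False)

    def forward(self, x):
        # x: (B, C, D, H, W); propagate information across channels along graph edges
        flat = x.flatten(2)                                   # (B, C, N)
        mixed = torch.einsum("ij,bjn->bin", self.adj_norm, flat)
        out = self.weight(mixed.transpose(1, 2)).transpose(1, 2)
        return out.reshape_as(x)
```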