Arrow Research search

Author name cluster

Yuxia Chen

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches and is not a full identity-disambiguation profile.

2 papers
1 author row

Possible papers (2)

AAAI Conference 2026 Conference Paper

HiFi-Mamba: Dual-Stream W-Laplacian Enhanced Mamba for High-Fidelity MRI Reconstruction

  • Hongli Chen
  • Pengcheng Fang
  • Yuxia Chen
  • Yingxuan Ren
  • Jing Hao
  • Fangfang Tang
  • Xiaohao Cai
  • Shanshan Shan

Reconstructing high-fidelity MR images from undersampled k-space data remains a challenging problem in MRI. While Mamba variants for vision tasks offer promising long-range modeling capabilities with linear-time complexity, their direct application to MRI reconstruction inherits two key limitations: (1) insensitivity to high-frequency anatomical details; and (2) reliance on redundant multi-directional scanning. To address these limitations, we introduce High-Fidelity Mamba (HiFi-Mamba), a novel dual-stream Mamba-based architecture comprising stacked W-Laplacian (WL) and HiFi-Mamba blocks. Specifically, the WL block performs fidelity-preserving spectral decoupling, producing complementary low- and high-frequency streams. This separation enables the HiFi-Mamba block to focus on low-frequency structures, enhancing global feature modeling. Concurrently, the HiFi-Mamba block selectively integrates high-frequency features through adaptive state-space modulation, preserving comprehensive spectral details. To eliminate the scanning redundancy, the HiFi-Mamba block adopts a streamlined unidirectional traversal strategy that preserves long-range modeling capability with improved computational efficiency. Extensive experiments on standard MRI reconstruction benchmarks demonstrate that HiFi-Mamba consistently outperforms state-of-the-art CNN-based, Transformer-based, and other Mamba-based models in reconstruction accuracy while maintaining a compact and efficient model design.
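The "fidelity-preserving spectral decoupling" in the abstract can be illustrated with a minimal sketch: blur the input to get a low-frequency stream and keep the residual as the high-frequency stream, so the two streams sum back to the original exactly. This is a generic Laplacian-style decomposition, not the paper's actual WL block; the kernel and function names here are illustrative assumptions.

```python
import numpy as np

def spectral_decouple(x, k1d=None):
    """Split a 2-D image into complementary low- and high-frequency streams.

    Illustrative stand-in for a WL-style block: low = separable blur of x,
    high = x - low, so low + high reconstructs x exactly (the
    "fidelity-preserving" property). Kernel choice is an assumption.
    """
    if k1d is None:
        # 1-D binomial (Gaussian-like) kernel, normalized to sum to 1
        k1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
        k1d = k1d / k1d.sum()
    pad = len(k1d) // 2
    padded = np.pad(x, pad, mode="reflect")
    # separable blur: filter rows, then columns
    low = np.apply_along_axis(lambda r: np.convolve(r, k1d, mode="valid"), 1, padded)
    low = np.apply_along_axis(lambda c: np.convolve(c, k1d, mode="valid"), 0, low)
    high = x - low
    return low, high

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
low, high = spectral_decouple(img)
# the decomposition is lossless: low + high == img
assert np.allclose(low + high, img)
```

In the paper's architecture, the low stream would feed the global state-space modeling path while the high stream is injected back via adaptive modulation; this sketch only shows the decoupling step itself.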

JBHI Journal 2025 Journal Article

DCLA: Deep Cooperative Learning for Advancing Automated Annotation of Electronic Medical Records in Cerebral Palsy

  • Meirong Xiao
  • Qiaofang Pang
  • Xiyuan Yang
  • Yuxia Chen
  • Xiaoying Wu
  • Min Zhong
  • Nong Xiao
  • Wensheng Hou

Automated annotation of electronic medical records for patients with cerebral palsy (CP) is crucial for downstream clinical applications. However, most existing methods lack mechanisms to verify model predictions before their acceptance and suffer from labeled-data scarcity. To address these challenges, we propose a Deep Cooperative Learning for Automated Annotation (DCLA) framework. DCLA integrates named entity recognition (NER) and relation extraction (RE) models that employ the multi-head attention mechanism and the global pointer to handle complex entities and relations. Building on this foundation, a cooperative learning (CL) mechanism is introduced to evaluate prediction quality through score matrices for sample ranking and selection. Low-quality predictions are verified by annotators, while high-quality predictions are accepted automatically, enabling iterative retraining with cooperatively labeled data. Experiments on a CP-specific corpus demonstrate that DCLA's NER and RE models outperform state-of-the-art methods, while the CL mechanism enhances proofreading efficiency. Overall, DCLA mitigates labeled-data scarcity and supports continuous model refinement.
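The cooperative-learning step described above (rank predictions by a quality score, auto-accept the high-quality ones, send the rest to annotators) can be sketched as follows. The function name, the score inputs, and the `accept_ratio` threshold are illustrative assumptions, not details from the paper.

```python
def split_for_review(samples, scores, accept_ratio=0.7):
    """Route predictions by quality score, as in a CL-style selection step.

    The top `accept_ratio` fraction (by score) is accepted automatically;
    the remainder is returned for human verification. Ties keep the
    sorted order produced by the scores.
    """
    # rank sample indices from highest to lowest quality score
    order = sorted(range(len(samples)), key=lambda i: scores[i], reverse=True)
    n_accept = int(len(samples) * accept_ratio)
    accepted = [samples[i] for i in order[:n_accept]]
    to_verify = [samples[i] for i in order[n_accept:]]
    return accepted, to_verify

# hypothetical confidence scores from the NER/RE models
accepted, to_verify = split_for_review(
    ["s1", "s2", "s3", "s4"], [0.9, 0.2, 0.8, 0.5], accept_ratio=0.5
)
# accepted -> ["s1", "s3"]; to_verify -> ["s4", "s2"]
```

After annotators verify the low-scoring items, both groups can be merged back into the training set for the iterative retraining round the abstract describes.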