Arrow Research search

Author name cluster

Feifei Cui

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
1 author row

Possible papers

6

AAAI Conference 2026 Conference Paper

EccoMamba: Enhanced Cross-hierarchical Continuity Orthogonal Mamba for Medical Image Segmentation

  • Junlin Xu
  • Jincan Li
  • Feifei Cui
  • Zhuang Zhang
  • Jialiang Yang
  • Shuting Jin
  • Qiangguo Jin
  • Yajie Meng

Medical image segmentation plays a crucial role in clinical diagnosis, lesion quantification, and preoperative planning. However, existing Mamba-based architectures, which rely on fixed-direction sequence modeling and flatten images into one-dimensional (1D) sequences, struggle to capture hierarchical anatomical features and spatial dependencies, thereby limiting their representational capacity for complex medical structures. To address these limitations, we propose EccoMamba (Enhanced Cross-hierarchical Continuity Orthogonal Mamba), a U-shaped encoder-decoder framework designed for medical image segmentation. In the encoder's downsampling path, we introduce a Hierarchical Aggregation Enhancement (HAE) module that integrates multi-scale convolutions with hierarchical attention mechanisms. The attention branch further incorporates cross-channel interactions, allowing the model to selectively enhance semantically relevant features while suppressing irrelevant background responses. For skip connections, we design a Structural Continuity Orthogonal (SCO) module to preserve spatial continuity by modeling cross-dimensional dependencies via orthogonal Axial Shifts (AS), thereby mitigating directional bias and improving anatomical consistency. Extensive experiments on four benchmark datasets (ISIC 2018, ISIC 2017, Synapse, and ACDC) show that EccoMamba consistently outperforms state-of-the-art methods in both segmentation accuracy and structural fidelity.
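The abstract names orthogonal Axial Shifts (AS) without specifying them; as a rough illustration of the general axial-shift idea (splitting channels into groups and shifting each group along one spatial axis, applied orthogonally along height and width), a minimal NumPy sketch might look like the following. The function name, the three-way channel grouping, and the zero-fill policy are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def axial_shift(x: np.ndarray, axis: int, shift: int = 1) -> np.ndarray:
    """Shift channel groups of a (C, H, W) feature map along one spatial axis.

    Channels are split into three groups, shifted by -shift, 0, and +shift.
    Vacated positions are zero-filled rather than wrapped, so information
    only flows between spatially adjacent positions.
    """
    c = x.shape[0]
    out = np.zeros_like(x)
    groups = np.array_split(np.arange(c), 3)
    for group, s in zip(groups, (-shift, 0, shift)):
        shifted = np.roll(x[group], s, axis=axis)
        if s > 0:                       # zero out the wrapped-around slice
            idx = [slice(None)] * x.ndim
            idx[axis] = slice(0, s)
            shifted[tuple(idx)] = 0
        elif s < 0:
            idx = [slice(None)] * x.ndim
            idx[axis] = slice(s, None)
            shifted[tuple(idx)] = 0
        out[group] = shifted
    return out

# Orthogonal shifts: one pass along height (axis 1), one along width (axis 2).
feat = np.random.rand(6, 8, 8).astype(np.float32)
h_shifted = axial_shift(feat, axis=1)
w_shifted = axial_shift(feat, axis=2)
```

Applying the height-axis and width-axis shifts in sequence mixes information along both spatial directions, which is one plausible reading of "orthogonal" axial shifts.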

JBHI Journal 2026 Journal Article

FusionMVSA: Multi-View Fusion Strategy With Self-Attention for Enhancing Drug Recommendation

  • Yajie Meng
  • Zhuang Zhang
  • Xudong Shang
  • Xianfang Tang
  • Jincan Li
  • Zilong Zhang
  • Feifei Cui
  • Shuting Jin

Leveraging the wealth of biomedical data available, we can derive insights into the relationships between biological entities from various angles. This underscores the complexity and significance of developing a dynamic approach for integrating data from multiple sources, a critical endeavor in drug recommendation. In this study, we introduce an innovative deep learning approach termed “Multi-View Fusion Strategy with Self-Attention” (FusionMVSA), designed to predict associations between drugs and diseases. To effectively amalgamate data from diverse sources and extract representative features, we have developed a feature extraction mechanism that capitalizes on similarities. This mechanism computes self-attention across multiple perspectives using shared group parameters, thereby highlighting common characteristics. Simultaneously, we utilize biomedical similarities among multi-source data as guiding factors for calculating similarity, enabling the capture of more nuanced features. Subsequently, we integrate these features through a feature fusion process, where known associations between drugs and diseases act as guiding terms. This strategy allows us to uncover the complementary aspects of different viewpoints. Ultimately, we predict potential drug-disease associations using a multi-layer perceptron neural network. Our methodology has undergone rigorous testing through various cross-validation experiments and case studies. We are confident that FusionMVSA will prove to be a valuable tool in drug recommendation, offering new avenues for exploration and discovery in the quest to combat diseases.
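The shared-group-parameter self-attention described above is not specified in detail here; as an illustration of the general idea (one set of projection weights reused across all views, so that characteristics common to the views dominate the attention maps, followed by a simple fusion), a hedged sketch could be. All names, the averaging fusion, and the weight initialisation are hypothetical, not FusionMVSA's actual design.

```python
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def shared_self_attention(views, wq, wk, wv):
    """Apply self-attention to each view with one shared set of projection
    weights, then fuse the attended views by averaging."""
    fused = []
    for x in views:                      # each x: (n_entities, d)
        q, k, v = x @ wq, x @ wk, x @ wv
        attn = softmax(q @ k.T / np.sqrt(k.shape[1]))
        fused.append(attn @ v)
    return np.mean(fused, axis=0)        # (n_entities, d)

rng = np.random.default_rng(0)
d = 16
views = [rng.normal(size=(10, d)) for _ in range(3)]   # e.g. three similarity views
wq, wk, wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
fused = shared_self_attention(views, wq, wk, wv)
```

Because `wq`, `wk`, and `wv` are shared across views, gradients from every view update the same parameters, which encourages the attention to pick out features the views have in common.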

AAAI Conference 2026 Conference Paper

Generalizable Drug–Target Interaction Prediction via ESM-2 Representations and Progressive Contrastive Curriculum Learning

  • Qianyang Wu
  • Jingwei Lv
  • Zilong Zhang
  • Feifei Cui

Predicting drug–target interactions (DTIs) is a fundamental task in computational drug discovery, yet it remains challenging under distribution shifts and limited training data. Existing approaches often suffer from poor generalization, weak cross-modal alignment between molecular and protein representations, and vulnerability to noisy supervision. We propose ESP-DTI, a unified framework designed to enhance generalization by integrating large-scale protein language models with curriculum learning and cross-modal contrastive alignment. Specifically, we leverage ESM-2 to encode context-aware protein representations and adopt a CLIP-style contrastive objective to align drug and protein embeddings in a shared latent space. To further improve learning robustness, we introduce a progressive curriculum sampling strategy that dynamically schedules training instances based on model confidence, enabling a gradual shift from easy to hard examples. Experimental results on four benchmark datasets demonstrate that ESP-DTI consistently outperforms state-of-the-art baselines, achieving a +3.1% improvement in average accuracy. Ablation studies confirm the complementary benefits of each component, validating their collective contribution to robust and generalizable DTI prediction. Our work underscores the effectiveness of combining pretrained protein language models with structured training curricula and cross-modal contrastive learning for reliable DTI prediction under real-world, distribution-shifted conditions.
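A CLIP-style contrastive objective, as referenced in this abstract, is a standard symmetric InfoNCE loss over matched embedding pairs. A minimal NumPy sketch of that general objective follows; the function name, temperature value, and batch layout are illustrative assumptions, not ESP-DTI's exact formulation.

```python
import numpy as np

def clip_style_loss(drug_emb: np.ndarray, prot_emb: np.ndarray,
                    temp: float = 0.07) -> float:
    """Symmetric InfoNCE loss over a batch of paired drug/protein embeddings.

    Row i of each matrix is assumed to be a matching drug-protein pair;
    all other rows in the batch serve as in-batch negatives.
    """
    # L2-normalise so dot products are cosine similarities
    d = drug_emb / np.linalg.norm(drug_emb, axis=1, keepdims=True)
    p = prot_emb / np.linalg.norm(prot_emb, axis=1, keepdims=True)
    logits = d @ p.T / temp                       # (batch, batch)
    labels = np.arange(len(logits))

    def ce(l):                                    # cross-entropy, diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the drug->protein and protein->drug directions
    return 0.5 * (ce(logits) + ce(logits.T))

rng = np.random.default_rng(1)
emb = rng.normal(size=(8, 32))
aligned = clip_style_loss(emb, emb)                      # matched pairs
mismatched = clip_style_loss(emb, np.roll(emb, 1, axis=0))  # shuffled pairs
```

When the paired embeddings are well aligned the loss is small, and it grows when pairs are shuffled, which is exactly the pressure that pulls drug and protein embeddings into a shared latent space.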

AAAI Conference 2026 Conference Paper

TLAGC: Taylor Linear Attention-Guided Graph Convolutions for Revealing Spatial Domains in Spatial Multi-Omics Data

  • Aoyun Geng
  • Chunyan Cui
  • Yunyun Su
  • Zhenjie Luo
  • Feifei Cui
  • Zilong Zhang

With the rapid advance of spatial multi-omics technologies, it has become possible to simultaneously profile transcripts, proteins, and chromatin states at their native spatial coordinates, thereby uncovering molecular architecture that transcends any single-omics perspective. However, the resulting data matrices are often highly sparse and suffer from unstable dimensionality. Graph-based neural methods capture only local neighborhood information, whereas conventional Transformers, although capable of modelling long-range dependencies, incur prohibitive computational costs on such data. To overcome these limitations, we propose TLAGC, a Taylor-Linear-Attention-Guided Graph Convolutional framework that couples a Taylor-expanded linear attention (TLA) mechanism with graph convolutional networks. By eliminating the softmax operation and connecting to the LocalGCN via residual connections, TLA preserves local structural information while enabling the integration of global and local contexts, thereby alleviating ineffective information propagation between spatially distant yet transcriptionally similar regions. Theoretical analysis confirms that TLA indeed reduces computational complexity, and extensive experiments on multiple spatial multi-omics benchmarks demonstrate that TLAGC consistently outperforms state-of-the-art baselines in delineating spatial domains.
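The complexity reduction claimed for Taylor-expanded linear attention can be illustrated with the standard first-order trick: replacing softmax's exp(q·k) with 1 + q·k lets the sums over keys be precomputed once, so cost drops from O(n²d) to O(nd²). The sketch below shows that general mechanism, not TLAGC's exact module; the normalisation choice and function name are assumptions.

```python
import numpy as np

def taylor_linear_attention(q: np.ndarray, k: np.ndarray,
                            v: np.ndarray) -> np.ndarray:
    """Linear-complexity attention via a first-order Taylor expansion.

    Attention weights are 1 + q.k instead of exp(q.k); q and k are
    L2-normalised so that 1 + q.k >= 0 and the weights stay valid.
    The key/value sums are computed once, independent of the query index.
    """
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    k = k / np.linalg.norm(k, axis=1, keepdims=True)
    n = k.shape[0]
    kv = k.T @ v                 # (d, d_v): sum of outer products k_j v_j^T
    k_sum = k.sum(axis=0)        # (d,)
    v_sum = v.sum(axis=0)        # (d_v,)
    numer = v_sum + q @ kv       # (n, d_v)
    denom = n + q @ k_sum        # (n,)
    return numer / denom[:, None]

rng = np.random.default_rng(2)
q = rng.normal(size=(12, 8))
k = rng.normal(size=(12, 8))
v = rng.normal(size=(12, 5))
out = taylor_linear_attention(q, k, v)
```

The result matches an explicit computation with weights 1 + q·k normalised per query, but never materialises the n-by-n attention matrix, which is the point for large, sparse spatial-omics inputs.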

JBHI Journal 2025 Journal Article

GCNLA: Inferring Cell-Cell Interactions From Spatial Transcriptomics With Long Short-Term Memory and Graph Convolutional Networks

  • Chao Yang
  • Xiuhao Fu
  • Zhenjie Luo
  • Leyi Wei
  • Jingbing Li
  • Feifei Cui
  • Quan Zou
  • Qingchen Zhang

Spatial transcriptomics analysis methods offer an opportunity to investigate highly diverse biological tissues. Cell-cell communication is fundamental for maintaining physiological homeostasis in organisms and coordinating complex biological processes, and identifying cell-cell interactions is critical for understanding cellular activities. A cell's interaction with other cells depends on several factors, and most existing methods, which consider only the gene expression of neighbouring cells and spatial location information, are therefore limited. In this paper, we propose GCNLA, a network architecture based on graph convolutional networks and a long short-term memory attention module, comprising a graph convolution layer, a long short-term memory network, an attention module, and residual connections. GCNLA not only learns the spatial structure of cells but also captures interaction information between distal cells, with the attention module further extracting and enhancing features related to cell-cell interactions. Finally, an inner-product decoder computes cosine similarity, which is used to infer cell-cell interactions. In addition, GCNLA is capable of reconstructing the complete cell-cell interaction network. Experimental results on seqFISH and MERFISH demonstrate that the GCNLA network structure has better robustness and noise immunity. The features learned by GCNLA also enable other downstream analyses, including single-cell-resolution cell clustering based on spatial information to resolve cell heterogeneity.
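The decoding step described above (an inner product over learned embeddings yielding cosine similarity, then a reconstructed interaction network) can be sketched generically as follows. The threshold value and function names are illustrative assumptions, not GCNLA's configuration.

```python
import numpy as np

def cosine_interaction_scores(z: np.ndarray) -> np.ndarray:
    """Decode pairwise cell-cell interaction scores from learned embeddings.

    The inner product of L2-normalised embeddings equals cosine similarity,
    so every score lies in [-1, 1].
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return z @ z.T

def reconstruct_network(z: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Threshold the score matrix into a binary cell-cell interaction network."""
    scores = cosine_interaction_scores(z)
    adj = (scores > threshold).astype(int)
    np.fill_diagonal(adj, 0)     # ignore self-interactions
    return adj

rng = np.random.default_rng(3)
z = rng.normal(size=(6, 4))      # e.g. 6 cells, 4-dim learned embeddings
scores = cosine_interaction_scores(z)
adj = reconstruct_network(z)
```

Because the score matrix is symmetric, the reconstructed adjacency is an undirected interaction network, consistent with the abstract's claim that the complete cell-cell interaction network can be recovered from the learned features.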