Arrow Research search

Author name cluster

Xiaopeng Jin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers (3)

JBHI · 2025 · Journal Article

A Lesion-Fusion Neural Network for Multi-View Diabetic Retinopathy Grading

  • Xiaoling Luo
  • Qihao Xu
  • Zhihua Wang
  • Chao Huang
  • Chengliang Liu
  • Xiaopeng Jin
  • Jianguo Zhang

As the most common complication of diabetes, diabetic retinopathy (DR) is one of the main causes of irreversible blindness. Automatic DR grading plays a crucial role in early diagnosis and intervention, reducing the risk of vision loss in people with diabetes. In recent years, various deep-learning approaches to DR grading have been proposed. Most previous DR grading models are trained on single-field fundus images, but the entire retina cannot be fully visualized in a single field of view. Lesions in fundus images are also scattered in location and vary greatly in appearance. To address the limitations caused by incomplete fundus features and the difficulty of obtaining lesion information, this work introduces a novel multi-view DR grading framework, which resolves the incompleteness of fundus features by jointly learning from fundus images captured at multiple fields of view. Furthermore, the proposed model combines multi-view inputs such as fundus images and lesion snapshots, and utilizes heterogeneous convolution blocks (HCB) and scalable self-attention classes (SSAC) to enhance its ability to capture lesion information. Experimental results show that the proposed method outperforms the benchmark methods on the large-scale dataset.
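The fusion idea in this abstract (pooling features from several fundus fields of view before grading) can be illustrated with a minimal numpy sketch. The linear "encoders", feature sizes, and five-grade output below are hypothetical stand-ins, not the paper's HCB/SSAC architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_view(view_feat, W):
    # Stand-in per-view encoder: a linear map followed by ReLU.
    return np.maximum(view_feat @ W, 0.0)

def grade_multiview(view_feats, encoder_weights, W_cls):
    # Encode each field of view, then fuse by mean pooling so the
    # classifier sees information from the whole retina, not one field.
    encoded = [encode_view(v, W) for v, W in zip(view_feats, encoder_weights)]
    fused = np.mean(encoded, axis=0)
    logits = fused @ W_cls
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

views = [rng.normal(size=16) for _ in range(4)]       # 4 fields of view
enc_ws = [rng.normal(size=(16, 8)) for _ in range(4)]
W_cls = rng.normal(size=(8, 5))                       # 5 DR grades (0-4)
probs = grade_multiview(views, enc_ws, W_cls)
```

Mean pooling is only one possible fusion rule; the paper's model learns the fusion jointly with lesion-aware attention.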

IJCAI · 2025 · Conference Paper

Enhancing Multimodal Protein Function Prediction Through Dual-Branch Dynamic Selection with Reconstructive Pre-Training

  • Xiaoling Luo
  • Peng Chen
  • Chengliang Liu
  • Xiaopeng Jin
  • Jie Wen
  • Yumeng Liu
  • Junsong Wang

Multimodal protein features play a crucial role in protein function prediction. However, these features span structural data, sequence features, protein attributes, and interaction networks, making it challenging to decipher their complex interconnections. In this work, we propose a multimodal protein function prediction method (DSRPGO) that utilizes dynamic selection and reconstructive pre-training mechanisms. To capture complex protein information, we introduce reconstructive pre-training to mine fine-grained information at low semantic levels. Moreover, we put forward a Bidirectional Interaction Module (BInM) to facilitate interactive learning among multimodal features. Additionally, to address the difficulty of hierarchical multi-label classification in this task, a Dynamic Selection Module (DSM) is designed to select the feature representation most conducive to the current prediction. The proposed DSRPGO model improves significantly on BPO, MFO, and CCO for human datasets, outperforming other benchmark models.
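The "dynamic selection" step described above (scoring each modality's representation and keeping the most useful one per sample) can be sketched in a few lines of numpy. The gate, modality count, and feature sizes are illustrative assumptions, not the DSM's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

def dynamic_select(modality_feats, gate_w):
    # Score each modality's representation with a shared linear gate,
    # then pick the highest-scoring one for the downstream classifier.
    scores = np.array([f @ gate_w for f in modality_feats])
    best = int(np.argmax(scores))
    return modality_feats[best], best

# Hypothetical features from three modalities
# (e.g. sequence, structure, interaction network).
feats = [rng.normal(size=8) for _ in range(3)]
gate_w = rng.normal(size=8)
chosen, idx = dynamic_select(feats, gate_w)
```

A hard argmax like this is not differentiable; trainable selection modules typically relax it (e.g. with a softmax over scores) during training.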

AAAI · 2023 · Conference Paper

MVCINN: Multi-View Diabetic Retinopathy Detection Using a Deep Cross-Interaction Neural Network

  • Xiaoling Luo
  • Chengliang Liu
  • Waikeung Wong
  • Jie Wen
  • Xiaopeng Jin
  • Yong Xu

Diabetic retinopathy (DR) is the main cause of irreversible blindness among working-age adults. Previous models for DR detection have been difficult to apply clinically, mainly because most use only single-view data; a single field of view (FOV) covers only about 13% of the retina, so most lesion features are lost. To alleviate this problem, we propose a multi-view model for DR detection that takes full advantage of multi-view images covering almost the entire retinal field. Specifically, we design a Cross-Interaction Self-Attention based Module (CISAM) that fuses local features extracted by convolutional blocks with long-range global features learned by transformer blocks. Furthermore, considering the pathological associations across views, we use a feature jigsaw to assemble and learn the features of multiple views. Extensive experiments on the recent public multi-view MFIDDR dataset with 34,452 images demonstrate the superiority of our method, which performs favorably against state-of-the-art models. To the best of our knowledge, this is the first study of DR detection on a public large-scale multi-view fundus image dataset.
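The cross-interaction pattern the abstract describes (one feature stream attending to the other) is a form of cross-attention, sketched below in numpy: queries come from the local (convolutional) tokens while keys and values come from the global (transformer) tokens. Token counts, dimensions, and the single-head layout are illustrative assumptions, not the CISAM's exact design.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(local_feats, global_feats, Wq, Wk, Wv):
    # Local (conv-style) tokens attend to global (transformer-style)
    # tokens: queries from one stream, keys/values from the other.
    Q = local_feats @ Wq
    K = global_feats @ Wk
    V = global_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # rows sum to 1
    return attn @ V  # one fused vector per local token

local = rng.normal(size=(6, 16))    # 6 local tokens
glob = rng.normal(size=(10, 16))    # 10 global tokens
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
fused = cross_attention(local, glob, Wq, Wk, Wv)
```

Running the two directions (local→global and global→local) and summing the results would give a bidirectional interfusion closer to what the abstract suggests.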