Arrow Research search

Author name cluster

Lanting Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers

3

EAAI Journal 2026 Journal Article

CGMAE: Self-supervised Masked Auto-Encoder with Cross-Graph node alignment for node classification

  • Ruoxian Song
  • Peng Cao
  • Guangqi Wen
  • Lanting Li
  • Wei Liang
  • Weiping Li
  • Jinzhu Yang
  • Osmar R. Zaiane

Masked Auto-Encoder (MAE) is widely adopted for node classification by recovering the randomly masked graph structure or node attributes. However, traditional MAE methods face two critical challenges: (1) features learned for reconstruction may not align with the downstream classification task, and (2) masking edges risks distorting inherent semantic relationships, degrading representation quality. To overcome these limitations, we propose a simple yet effective self-supervised Masked Auto-Encoder with Cross-Graph node alignment (CGMAE) for node classification. It leverages labeled nodes from an auxiliary graph to enhance discriminative feature learning in an unlabeled target graph, bridging the task gap between reconstruction and classification. CGMAE introduces a node-level alignment mechanism to address distribution shifts across graphs. This design jointly learns structural patterns and node attributes through dedicated encoders, enabling multi-view feature matching to refine node representations. Furthermore, CGMAE innovatively predicts masked target edges using aligned nodes from the auxiliary graph, preserving the semantic relationships during reconstruction. Extensive experiments on six diverse networks (standard, complex, sparse, and large-scale graphs) verify the effectiveness and robustness of the proposed method in self-supervised/unsupervised node classification tasks, with accuracy improvements ranging from 1.5%/1.1% to 3.5%/7.0% over state-of-the-art methods. Code is available at https://github.com/songruoxian/CGMAE.
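The attribute-masking step the abstract builds on can be illustrated with a minimal sketch: zero out a random subset of node-attribute rows, encode and decode, and score reconstruction only on the masked nodes. This is a generic MAE-style toy (plain NumPy, linear "encoders"), not the CGMAE model; the function names and the linear stand-ins for the structural/attribute encoders are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_node_attributes(X, mask_ratio=0.5):
    """Randomly mask a fraction of nodes by zeroing their attribute rows."""
    n = X.shape[0]
    n_masked = int(n * mask_ratio)
    masked_idx = rng.choice(n, size=n_masked, replace=False)
    X_masked = X.copy()
    X_masked[masked_idx] = 0.0
    return X_masked, masked_idx

def reconstruction_loss(X, X_hat, masked_idx):
    """Mean squared error restricted to the masked nodes."""
    diff = X[masked_idx] - X_hat[masked_idx]
    return float(np.mean(diff ** 2))

# Toy graph: 10 nodes with 4-dim attributes; a linear encoder/decoder
# stands in for the learned encoders described in the abstract.
X = rng.normal(size=(10, 4))
W_enc = rng.normal(size=(4, 3))
W_dec = rng.normal(size=(3, 4))

X_masked, masked_idx = mask_node_attributes(X, mask_ratio=0.5)
Z = X_masked @ W_enc      # encode visible attributes
X_hat = Z @ W_dec         # decode back to attribute space
loss = reconstruction_loss(X, X_hat, masked_idx)
```

In a real model the loss would be backpropagated to train the encoders; here it only shows where the masking and masked-only scoring sit in the pipeline.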

JBHI Journal 2025 Journal Article

An Efficient Transfer Learning With Prompt Learning for Brain Disorders Diagnosis

  • Liuzeng Zhang
  • Lanting Li
  • Peng Cao
  • Jinzhu Yang
  • Osmar R. Zaiane
  • Fei Wang

The limited availability of training data significantly restricts the performance of deep supervised models for brain disease diagnosis. It is crucial to develop a learning framework through cross-disease transfer learning that can extract more information from the limited data. To address this challenge, we concentrate on prompt learning and endeavor to extend its application to brain networks. Specifically, we propose a novel prompt learning framework called BPformer, which integrates knowledge transferred across diseases via specific prompts while keeping the backbone architecture unchanged. The specific prompts incorporate 1) a mask prompt to determine whether the edges are noisy or discriminating, 2) disorder prompts for modeling consistent and disorder-specific knowledge, and 3) adaptive instance-level prompts to account for inter-individual variations. We evaluate BPformer on a private dataset from Nanjing Medical University, the public Autism Brain Imaging Data Exchange dataset, and the public Alzheimer's Disease Neuroimaging Initiative dataset. We demonstrate the effectiveness of the proposed model across various disease classification tasks, including major depressive disorder, bipolar disorder, Alzheimer's disease, and autism spectrum disorder diagnoses. In addition, the proposed method enables disease interpretability and subtype analysis, empowering physicians to provide patients with more accurate and fine-grained treatment plans.
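The core mechanic of prompt learning described above, small learnable prompt vectors added in front of a frozen backbone, plus a mask prompt that gates inputs, can be sketched in a few lines. This is a generic illustration in NumPy, not the BPformer architecture; the single frozen linear layer, the sigmoid gating, and all variable names are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, n_prompts, seq_len = 8, 2, 5

# Frozen backbone: one fixed linear layer standing in for a pre-trained model.
W_frozen = rng.normal(size=(d_model, d_model))

# Learnable pieces (the only parameters that would receive gradients):
disorder_prompt = rng.normal(size=(n_prompts, d_model)) * 0.01  # disorder-level prompts
edge_mask_logits = np.zeros(seq_len)                            # mask prompt over inputs

def forward(x, prompt):
    """Gate inputs with the mask prompt, prepend prompt tokens, run the frozen layer."""
    gate = 1.0 / (1.0 + np.exp(-edge_mask_logits))  # sigmoid: keep vs. suppress each input
    x_gated = x * gate[:, None]
    h = np.concatenate([prompt, x_gated], axis=0)   # prompts prepended to the sequence
    return h @ W_frozen

x = rng.normal(size=(seq_len, d_model))
out = forward(x, disorder_prompt)  # shape: (n_prompts + seq_len, d_model)
```

The design point is that transfer happens by training only the tiny prompt tensors while `W_frozen` never changes, which is what makes the approach data-efficient.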

ICML Conference 2024 Conference Paper

Accelerating Iterative Retrieval-augmented Language Model Serving with Speculation

  • Zhihao Zhang 0001
  • Alan Zhu 0001
  • Lijie Yang 0003
  • Yihua Xu
  • Lanting Li
  • Phitchaya Mangpo Phothilimthana
  • Zhihao Jia

This paper introduces RaLMSpec, a framework that accelerates iterative retrieval-augmented language model (RaLM) serving with speculative retrieval and batched verification. RaLMSpec further introduces several important systems optimizations, including prefetching, an optimal speculation stride scheduler, and asynchronous verification. The combination of these techniques allows RaLMSpec to significantly outperform existing systems. For document-level iterative RaLM serving, evaluation over three LLMs on four QA datasets shows that RaLMSpec improves over existing approaches by $1.75$-$2.39\times$, $1.04$-$1.39\times$, and $1.31$-$1.77\times$ when the retriever is an exact dense retriever, approximate dense retriever, and sparse retriever respectively. For token-level iterative RaLM (KNN-LM) serving, RaLMSpec is up to $7.59\times$ and $2.45\times$ faster than existing methods for exact dense and approximate dense retrievers, respectively.
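The speculate-then-verify pattern the abstract describes can be sketched as: guess each retrieval cheaply (here, from a local cache), verify all guesses against the exact retriever in one batched call, and roll back from the first mismatch. This is a minimal toy of the general technique, not the RaLMSpec system; the dictionary "corpus", the cache policy, and every name here are illustrative assumptions.

```python
# Toy corpus: the exact retriever is authoritative but assumed expensive;
# the speculative retriever answers instantly from a cache of past results.
corpus = {"q1": "doc-A", "q2": "doc-B", "q3": "doc-C"}

def exact_retrieve(query):
    return corpus[query]

class SpeculativeRetriever:
    def __init__(self):
        self.cache = {}

    def speculate(self, query):
        # Guess from the cache; fall back to a cheap default when unseen.
        return self.cache.get(query, "doc-A")

    def update(self, query, doc):
        self.cache[query] = doc

def serve_with_speculation(queries):
    """Speculate all retrievals, verify them in one batched exact call,
    and roll back speculation from the first mismatch onward."""
    spec = SpeculativeRetriever()
    guesses = [spec.speculate(q) for q in queries]   # speculation phase
    truth = [exact_retrieve(q) for q in queries]     # batched verification
    results = []
    for q, g, t in zip(queries, guesses, truth):
        results.append(t)          # always serve the verified document
        spec.update(q, t)
        if g != t:
            break                  # mismatch: discard later speculative steps
    # Resume non-speculatively for whatever was rolled back.
    for q in queries[len(results):]:
        results.append(exact_retrieve(q))
    return results
```

Because every served document is checked against the exact retriever, the output matches non-speculative serving; the speedup in the real system comes from overlapping speculation with generation, which this sketch omits.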