Arrow Research search

Author name cluster

Li Lin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers (6)

AAAI 2026 · Conference Paper

ORTCL: Towards Continual Learning of Time Series Foundation Models on Streaming Data via Orthogonal Rotation

  • Li Lin
  • Xinrui Zhang
  • Qi Zhang
  • Shuai Wang
  • Kaiwen Xia

Time Series Foundation Models (TSFMs) have emerged as a promising approach in time series analysis. Due to the large-scale parameters of TSFMs and their pretraining cost, adapting TSFMs to streaming data is the key factor constraining their application effectiveness, because streaming data often exhibits data-distribution and task drifts that cannot be learned through offline training. Existing methods typically address streaming-data modeling with continual learning through model fine-tuning or model editing. However, fine-tuning incurs significant computational costs, while editing methods can lead to shifts in the original feature space during streaming updates. To address these limitations, we propose a novel Orthogonal Rotation Transformation-based Continual Learning method, called ORTCL, for TSFMs. Our key insight is to apply orthogonal matrix rotations to the input and output feature spaces of the TSFMs during model editing. This preserves the metric structure of the original feature space and enables new data to be mapped directly into the existing feature space of the TSFMs. Specifically, we obtain the orthogonal matrix for the input layer via singular value decomposition and derive the corresponding transformation matrix for the output layer through least-squares optimization. Extensive experimental results demonstrate that ORTCL outperforms existing methods in both single-domain and cross-domain streaming time series forecasting tasks, effectively mitigating catastrophic forgetting.
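The rotation-plus-least-squares recipe the abstract sketches can be illustrated in a few lines. Everything below (matrix names, shapes, the Procrustes-style use of SVD) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: features from the pretrained model (old) and from
# the streaming batch (new); the 128x16 shapes are arbitrary for illustration.
X_old = rng.normal(size=(128, 16))
X_new = rng.normal(size=(128, 16))

# Orthogonal rotation for the input space via SVD (orthogonal Procrustes):
# find R minimizing ||X_new @ R - X_old||_F subject to R^T R = I.
U, _, Vt = np.linalg.svd(X_new.T @ X_old)
R = U @ Vt  # orthogonal, so it preserves the metric structure of the space

# Corresponding output-layer transform via least squares, mapping the
# rotated features onto the model's original outputs (placeholder targets).
Y_old = rng.normal(size=(128, 4))
W, *_ = np.linalg.lstsq(X_new @ R, Y_old, rcond=None)
```

Because `R` is orthogonal, distances and angles between feature vectors are unchanged by the rotation, which is the property the abstract credits with avoiding feature-space drift during editing.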

AAAI 2025 · Conference Paper

Improving Generalization for AI-Synthesized Voice Detection

  • Hainan Ren
  • Li Lin
  • Chun-Hao Liu
  • Xin Wang
  • Shu Hu

AI-synthesized voice technology has the potential to create realistic human voices for beneficial applications, but it can also be misused for malicious purposes. While existing AI-synthesized voice detection models excel in intra-domain evaluation, they face challenges in generalizing across different domains, potentially becoming obsolete as new voice generators emerge. Current solutions use diverse data and advanced machine learning techniques (e.g., domain-invariant representation, self-supervised learning), but are limited by predefined vocoders and sensitivity to factors like background noise and speaker identity. In this work, we introduce an innovative disentanglement framework aimed at extracting domain-agnostic artifact features related to vocoders. Utilizing these features, we enhance model learning in a flat loss landscape, enabling escape from suboptimal solutions and improving generalization. Extensive experiments on benchmarks show our approach outperforms state-of-the-art methods, achieving up to 5.12% improvement in the equal error rate metric in intra-domain and 7.59% in cross-domain evaluations.
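The "flat loss landscape" learning the abstract mentions is, in works of this kind, often realized with a sharpness-aware update. The toy sketch below shows one such step on a throwaway quadratic; the ascent radius `rho`, the quadratic loss, and the update rule are all assumptions for illustration, not the authors' method:

```python
import numpy as np

def loss(w):
    return 0.5 * np.sum(w ** 2)  # toy quadratic; the paper's objective differs

def grad(w):
    return w  # gradient of the toy loss

# One sharpness-aware-style update: perturb toward the local worst case
# within a small ball, then descend using the gradient at the perturbed point.
w = np.array([2.0, -1.0])
rho, lr = 0.05, 0.1
g = grad(w)
eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent step inside the rho-ball
w = w - lr * grad(w + eps)                   # descend at the perturbed weights
```

Minimizing the loss at the worst nearby point, rather than at the current point, biases optimization toward flat minima, which is the escape-from-suboptimal-solutions behavior the abstract describes.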

JBHI 2025 · Journal Article

LF-SynthSeg: Label-Free Brain Tissue-Assisted Tumor Synthesis and Segmentation

  • Pengxiao Xu
  • Junyan Lyu
  • Li Lin
  • Pujin Cheng
  • Xiaoying Tang

Unsupervised brain tumor segmentation is pivotal in realms of disease diagnosis, surgical planning, and treatment response monitoring, with the distinct advantage of obviating the need for labeled data. Traditional methodologies in this domain, however, often fall short in fully capitalizing on the extensive prior knowledge of brain tissue, typically approaching the task merely as an anomaly detection challenge. In our research, we present an innovative strategy that effectively integrates brain tissues' prior knowledge into both the synthesis and segmentation of brain tumor from T2-weighted Magnetic Resonance Imaging scans. Central to our method is the tumor synthesis mechanism, employing randomly generated ellipsoids in conjunction with the intensity profiles of brain tissues. This methodology not only fosters a significant degree of variation in the tumor presentations within the synthesized images but also facilitates the creation of an essentially unlimited pool of abnormal T2-weighted images. These synthetic images closely replicate the characteristics of real tumor-bearing scans. Our training protocol extends beyond mere tumor segmentation; it also encompasses the segmentation of brain tissues, thereby directing the network's attention to the boundary relationship between brain tumor and brain tissue, thus improving the robustness of our method. We evaluate our approach across five widely recognized public datasets (BRATS 2019, BRATS 2020, BRATS 2021, PED and SSA), and the results show that our method outperforms state-of-the-art unsupervised tumor segmentation methods by large margins. Moreover, the proposed method achieves more than 92% of the fully supervised performance on the same testing datasets.
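The ellipsoid-plus-intensity-profile synthesis the abstract describes can be sketched directly. The helper name, the volume size, and the Gaussian intensity profile below are illustrative assumptions; the paper's actual sampling scheme may differ:

```python
import numpy as np

rng = np.random.default_rng(42)

def synthesize_tumor(volume, tissue_mean, tissue_std):
    """Paint a randomly placed, randomly sized ellipsoid into `volume`,
    filling it with intensities drawn from a tissue-derived profile."""
    d, h, w = volume.shape
    center = rng.uniform([d * 0.3, h * 0.3, w * 0.3],
                         [d * 0.7, h * 0.7, w * 0.7])
    axes = rng.uniform(3.0, min(d, h, w) * 0.2, size=3)
    zz, yy, xx = np.mgrid[0:d, 0:h, 0:w]
    mask = (((zz - center[0]) / axes[0]) ** 2 +
            ((yy - center[1]) / axes[1]) ** 2 +
            ((xx - center[2]) / axes[2]) ** 2) <= 1.0
    out = volume.copy()
    out[mask] = rng.normal(tissue_mean, tissue_std, size=mask.sum())
    return out, mask  # mask doubles as a free segmentation label

brain = rng.normal(0.5, 0.05, size=(32, 32, 32))  # placeholder T2 volume
synth, mask = synthesize_tumor(brain, tissue_mean=0.9, tissue_std=0.05)
```

Because the mask is known by construction, every synthetic volume comes with an exact tumor label for free, which is what makes the "essentially unlimited pool" of training pairs possible without annotation.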

ICML 2025 · Conference Paper

Preserving AUC Fairness in Learning with Noisy Protected Groups

  • Mingyang Wu
  • Li Lin
  • Wenbin Zhang 0002
  • Xin Wang 0045
  • Zhenhuan Yang
  • Shu Hu 0001

The Area Under the ROC Curve (AUC) is a key metric for classification, especially under class imbalance, with growing research focus on optimizing AUC over accuracy in applications like medical image analysis and deepfake detection. This makes fairness in AUC optimization crucial, as biases can impact protected groups. While various fairness mitigation techniques exist, fairness considerations in AUC optimization remain in their early stages, with most research focusing on improving AUC fairness under the assumption of clean protected groups. However, these studies often overlook the impact of noisy protected groups, leading to fairness violations in practice. To address this, we propose the first robust AUC fairness approach under noisy protected groups with theoretical fairness guarantees, using distributionally robust optimization. Extensive experiments on tabular and image datasets show that our method outperforms state-of-the-art approaches in preserving AUC fairness. The code is available at https://github.com/Purdue-M2/AUC_Fairness_with_Noisy_Groups.
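The fairness quantity at stake here — the disparity in AUC between protected groups — is easy to compute directly. The toy scores, group labels, and pairwise AUC estimator below are illustrative only; the paper's distributionally robust optimization is not reproduced:

```python
import numpy as np

def auc(scores, labels):
    """Pairwise AUC: the probability that a random positive example
    outscores a random negative one (ties counted as half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(1)
n = 400
labels = rng.integers(0, 2, size=n)
group = rng.integers(0, 2, size=n)            # possibly noisy group labels
scores = labels + rng.normal(0.0, 1.0, size=n)  # toy classifier scores

auc_g0 = auc(scores[group == 0], labels[group == 0])
auc_g1 = auc(scores[group == 1], labels[group == 1])
gap = abs(auc_g0 - auc_g1)  # the inter-group AUC disparity to be constrained
```

When the `group` labels themselves are noisy, constraining this gap naively can enforce fairness with respect to the wrong partition, which is the failure mode the paper's robust formulation is built to avoid.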

JBHI 2025 · Journal Article

ProCNS: Progressive Prototype Calibration and Noise Suppression for Weakly-Supervised Medical Image Segmentation

  • Yixiang Liu
  • Li Lin
  • Kenneth K. Y. Wong
  • Xiaoying Tang

Weakly-supervised segmentation (WSS) has emerged as a solution to mitigate the conflict between annotation cost and model performance by adopting sparse annotation formats (e.g., point, scribble, block, etc.). Typical approaches attempt to exploit anatomy and topology priors to directly expand sparse annotations into pseudo-labels. However, due to a lack of attention to the ambiguous boundaries in medical images and insufficient exploration of sparse supervision, existing approaches tend to generate erroneous and overconfident pseudo proposals in noisy regions, leading to cumulative model error and performance degradation. In this work, we propose a novel WSS approach, named ProCNS, encompassing two synergistic modules devised with the principles of progressive prototype calibration and noise suppression. Specifically, we design a Prototype-based Regional Spatial Affinity (PRSA) loss to maximize the pair-wise affinities between spatial and semantic elements, providing our model of interest with more reliable guidance. The affinities are derived from the input images and the prototype-refined predictions. Meanwhile, we propose an Adaptive Noise Perception and Masking (ANPM) module to obtain more enriched and representative prototype representations, which adaptively identifies and masks noisy regions within the pseudo proposals, reducing potential erroneous interference during prototype computation. Furthermore, we generate specialized soft pseudo-labels for the noisy regions identified by ANPM, providing supplementary supervision. Extensive experiments on six medical image segmentation tasks involving different modalities demonstrate that the proposed framework significantly outperforms representative state-of-the-art methods.
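The core idea of ANPM's role in prototype computation — exclude pixels flagged as noisy before averaging features into a class prototype — can be sketched in a few lines. The function name, shapes, and random inputs below are illustrative assumptions, and the adaptive noise-detection itself is replaced by a placeholder mask:

```python
import numpy as np

def masked_prototype(features, pseudo_labels, noise_mask, cls):
    """Class prototype: mean feature over pixels pseudo-labeled `cls`,
    excluding pixels flagged as noisy (a simplification of ANPM's role)."""
    keep = (pseudo_labels == cls) & (~noise_mask)
    return features[keep].mean(axis=0)

rng = np.random.default_rng(7)
feats = rng.normal(size=(64, 8))   # per-pixel feature vectors (toy)
pl = rng.integers(0, 2, size=64)   # pseudo-labels from expanded annotations
noisy = rng.random(64) < 0.2       # placeholder for ANPM's noise detection
proto = masked_prototype(feats, pl, noisy, cls=1)
```

Masking before averaging keeps erroneous pseudo proposals from contaminating the prototypes, which in turn keeps the affinity-based guidance derived from those prototypes reliable.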

AAAI 2020 · Conference Paper

Solving Sequential Text Classification as Board-Game Playing

  • Chen Qian
  • Fuli Feng
  • Lijie Wen
  • Zhenpeng Chen
  • Li Lin
  • Yanan Zheng
  • Tat-Seng Chua

Sequential Text Classification (STC) aims to classify a sequence of text fragments (e.g., words in a sentence or sentences in a document) into a sequence of labels. In addition to the intra-fragment text contents, considering the inter-fragment context dependencies is also important for STC. Previous sequence labeling approaches largely generate a sequence of labels in left-to-right reading order. However, the need for context information in making decisions varies across fragments and is not strictly organized in a left-to-right order. Therefore, it is appealing to label the fragments that need less consideration of context information before labeling the fragments that need more. In this paper, we propose a novel model that labels a sequence of fragments in jumping order. Specifically, we devise a dedicated board-game to develop a correspondence between solving STC and board-game playing. By defining proper game rules and devising a game-state evaluator in which context clues are injected, at each round each player is effectively pushed to find the optimal move without position restrictions by considering the current game state, which corresponds to producing a label for an as-yet-unlabeled fragment, in jumping order, with consideration of the context clues. The final game-end state is viewed as the optimal label sequence. Extensive results on three representative datasets show that the proposed approach outperforms the state-of-the-art methods with statistical significance.