Arrow Research search

Author name cluster

Yuting Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers


JBHI Journal 2026 Journal Article

TFDF: Self-Supervised Time-Frequency Dynamic Fusion with Dual Constraints for Atrial Fibrillation Detection

  • Yunfan Chen
  • Sizhen Li
  • Xiangkui Wan
  • Yuting Li

Atrial fibrillation (AF) is the most common paroxysmal cardiac arrhythmia, requiring continuous wearable electrocardiogram (ECG) monitoring for early detection. Supervised AF detection methods rely on extensive annotated ECG data, which is a costly barrier for real-world applications. Self-supervised learning (SSL) leverages unlabeled data for representation learning. However, existing SSL methods often fail to effectively model the cross-domain structural dependencies between temporal and spectral characteristics of AF under label-free conditions. To address this challenge, we propose a self-supervised Time-Frequency Dynamic Fusion (TFDF) with dual constraints for label-efficient AF detection. The TFDF takes temporal RR interval rhythm features as stable guidance and integrates multi-scale spectral representations of the RR and P-wave bands. A directional consistency constraint is introduced as the core objective to achieve adaptive cross-domain feature fusion, ensuring coherent latent representations between temporal and spectral modalities. Meanwhile, a cluster-guided constraint is designed to provide soft structural priors, stabilizing feature alignment during unsupervised pretraining. The TFDF was pretrained on the MIT-BIH AF Database and fine-tuned on the CPSC2018 dataset. When evaluated on the Chapman-Shaoxing 12-lead ECG dataset, TFDF achieved an average F1-score of approximately 0.920 and AUC of approximately 0.979, outperforming state-of-the-art SSL baselines. These results demonstrate that TFDF provides a generalizable and label-efficient solution for ECG-based AF detection.
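The abstract's "directional consistency constraint" aligns spectral representations to the temporal features that serve as stable guidance. The paper's exact formulation is not given here; the following is a minimal toy sketch assuming a simple cosine-alignment loss between two embedding vectors, with `temporal` and `spectral` as hypothetical per-sample embeddings:

```python
import numpy as np

def directional_consistency_loss(temporal, spectral):
    """Toy alignment loss: 1 - cos(temporal, spectral).
    Pulls the spectral embedding toward the temporal one,
    which the abstract treats as the stable guidance signal.
    Returns 0.0 when the two vectors point the same way."""
    t = temporal / np.linalg.norm(temporal)
    s = spectral / np.linalg.norm(spectral)
    return 1.0 - float(np.dot(t, s))

t = np.array([1.0, 0.0, 0.0])
aligned = np.array([2.0, 0.0, 0.0])     # same direction as t
orthogonal = np.array([0.0, 1.0, 0.0])  # unrelated direction

print(directional_consistency_loss(t, aligned))     # 0.0
print(directional_consistency_loss(t, orthogonal))  # 1.0
```

In the actual method this loss would be applied over batches of learned embeddings during pretraining, alongside the cluster-guided constraint the abstract mentions.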

NeurIPS Conference 2025 Conference Paper

First SFT, Second RL, Third UPT: Continual Improving Multi-Modal LLM Reasoning via Unsupervised Post-Training

  • Lai Wei
  • Yuting Li
  • Chen Wang
  • Yue Wang
  • Linghe Kong
  • Weiran Huang
  • Lichao Sun

Improving Multi-modal Large Language Models (MLLMs) in the post-training stage typically relies on supervised fine-tuning (SFT) or reinforcement learning (RL), which require expensive and manually annotated multi-modal data, an ultimately unsustainable resource. This limitation has motivated growing interest in unsupervised paradigms as a third stage of post-training after SFT and RL. While recent efforts have explored this direction, their methods are complex and difficult to iterate on. To address this, we propose MM-UPT, a simple yet effective framework for unsupervised post-training of MLLMs, enabling continual self-improvement without any external supervision. The training method of MM-UPT builds upon GRPO, replacing traditional reward signals with a self-rewarding mechanism based on majority voting over multiple sampled responses. Our experiments demonstrate that this training method effectively improves the reasoning ability of Qwen2.5-VL-7B (e.g., 66.3% → 72.9% on MathVista, 62.9% → 68.7% on We-Math), using standard datasets without ground-truth labels. To further explore scalability, we extend our framework to a data self-generation setting, designing two strategies that prompt the MLLM to synthesize new training samples on its own. Additional experiments show that combining these synthetic data with the unsupervised training method can also boost performance, highlighting a promising approach for scalable self-improvement. Overall, MM-UPT offers a new paradigm for autonomous enhancement of MLLMs, serving as a critical third step after initial SFT and RL in the absence of external supervision. Our code is available at https://github.com/waltonfuture/MM-UPT.
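The core self-rewarding mechanism, majority voting over multiple sampled responses, can be sketched without any model in the loop. This is a toy illustration, not the paper's implementation: `extract_answer` is a hypothetical parser for a response's final answer, and real usage would feed these rewards into GRPO-style advantage computation:

```python
from collections import Counter

def extract_answer(response: str) -> str:
    """Hypothetical parser: take the text after the last 'Answer:' marker."""
    return response.rsplit("Answer:", 1)[-1].strip()

def self_rewards(responses):
    """Assign each sampled response a reward of 1.0 if its final
    answer matches the majority answer across all samples, else 0.0.
    No ground-truth label is used; the group's consensus is the signal."""
    answers = [extract_answer(r) for r in responses]
    majority, _ = Counter(answers).most_common(1)[0]
    return [1.0 if a == majority else 0.0 for a in answers]

# Five sampled responses to the same (unlabeled) question
samples = [
    "Reasoning... Answer: 42",
    "Reasoning... Answer: 42",
    "Reasoning... Answer: 41",
    "Reasoning... Answer: 42",
    "Reasoning... Answer: 7",
]
print(self_rewards(samples))  # [1.0, 1.0, 0.0, 1.0, 0.0]
```

The consensus answer stands in for a ground-truth label, which is what lets the post-training stage run without external supervision.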