Arrow Research search

Author name cluster

Jingyu Lin

Papers that may be associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches and is not a full identity-disambiguation profile.

4 papers
2 author rows

Possible papers (4)

AAAI 2026 (Conference Paper)

FDP: A Frequency-Decomposition Preprocessing Pipeline for Unsupervised Anomaly Detection in Brain MRI

  • Hao Li
  • Zhenfeng Zhuang
  • Jingyu Lin
  • Yu Liu
  • Yifei Chen
  • Qiong Peng
  • Lequan Yu
  • Liansheng Wang

Due to the diversity of brain anatomy and the scarcity of annotated data, supervised anomaly detection for brain MRI remains challenging, driving the development of unsupervised anomaly detection (UAD) approaches. Current UAD methods typically utilize synthetically generated noise perturbations on healthy MRIs to train generative models for normal anatomy reconstruction, enabling anomaly detection via residual maps. However, such simulated anomalies lack the biophysical fidelity and morphological complexity characteristic of true clinical lesions. To advance UAD in brain MRI, we conduct the first systematic frequency-domain analysis of pathological signatures, revealing two key properties: (1) anomalies exhibit unique frequency patterns distinguishable from normal anatomy, and (2) low-frequency signals maintain consistent representations across healthy scans. These insights motivate our Frequency-Decomposition Preprocessing (FDP) framework—the first UAD method to leverage frequency-domain reconstruction for simultaneous pathology suppression and anatomical preservation. FDP can integrate seamlessly with existing anomaly simulation techniques, consistently enhancing detection performance across diverse architectures while maintaining diagnostic fidelity. Experimental results demonstrate that FDP consistently improves anomaly detection performance when integrated with existing methods. Notably, FDP achieves a 17.63% increase in DICE score with LDM while maintaining robust improvements across multiple baselines.
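The abstract does not give implementation details, but the core idea it names, splitting an image into low- and high-frequency components in the Fourier domain, can be illustrated generically. The sketch below is not the paper's FDP pipeline; the ideal circular low-pass mask and the cutoff parameter are assumptions chosen purely for illustration.

```python
import numpy as np

def frequency_decompose(image, cutoff=0.2):
    """Split a 2D image into low- and high-frequency parts using an
    ideal circular low-pass mask in the (shifted) Fourier domain.
    `cutoff` is the mask radius as a fraction of the half-diagonal."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Distance of each frequency bin from the spectrum centre.
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    radius = cutoff * np.sqrt((h / 2) ** 2 + (w / 2) ** 2)
    mask = dist <= radius
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    high = image - low  # residual carries the high-frequency content
    return low, high

img = np.random.rand(64, 64)
low, high = frequency_decompose(img)
```

By construction the two components sum back to the original image, and the low-frequency part is visibly smoother, which is the property a reconstruction-based UAD pipeline would exploit.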

AAAI 2026 (Conference Paper)

IdentityStory: Taming Your Identity-Preserving Generator for Human-Centric Story Generation

  • Donghao Zhou
  • Jingyu Lin
  • Guibao Shen
  • Quande Liu
  • Jialin Gao
  • Lihao Liu
  • Lan Du
  • Cunjian Chen

Recent visual generative models enable story generation with consistent characters from text, but human-centric story generation faces additional challenges, such as maintaining detailed and diverse human face consistency and coordinating multiple characters across different images. This paper presents IdentityStory, a framework for human-centric story generation that ensures consistent character identity across multiple sequential images. By taming identity-preserving generators, the framework features two key components: Iterative Identity Discovery, which extracts cohesive character identities, and Re-denoising Identity Injection, which re-denoises images to inject identities while preserving desired context. Experiments on the ConsiStory-Human benchmark demonstrate that IdentityStory outperforms existing methods, particularly in face consistency, and supports multi-character combinations. The framework also shows strong potential for applications such as infinite-length story generation and dynamic character composition.

NeurIPS 2025 (Conference Paper)

SceneDecorator: Towards Scene-Oriented Story Generation with Scene Planning and Scene Consistency

  • Quanjian Song
  • Donghao Zhou
  • Jingyu Lin
  • Fei Shen
  • Jiaze Wang
  • Xiaowei Hu
  • Cunjian Chen
  • Pheng-Ann Heng

Recent text-to-image models have revolutionized image generation, but they still struggle with maintaining concept consistency across generated images. While existing works focus on character consistency, they often overlook the crucial role of scenes in storytelling, which restricts their creativity in practice. This paper introduces scene-oriented story generation, addressing two key challenges: (i) scene planning, where current methods fail to ensure scene-level narrative coherence by relying solely on text descriptions, and (ii) scene consistency, which remains largely unexplored in terms of maintaining scene consistency across multiple stories. We propose SceneDecorator, a training-free framework that employs VLM-Guided Scene Planning to ensure narrative coherence across different scenes in a ``global-to-local'' manner, and Long-Term Scene-Sharing Attention to maintain long-term scene consistency and subject diversity across generated stories. Extensive experiments demonstrate the superior performance of SceneDecorator, highlighting its potential to unleash creativity in the fields of arts, films, and games.

IROS 2004 (Conference Paper)

Direct adaptive control using dyadic networks

  • Jingyu Lin
  • Zengqi Sun

A new type of wavelet-based, linearly parameterized network, the dyadic network, is proposed in this paper, with application to inverse-dynamics-based adaptive control. The function approximation capability of dyadic networks is supported by dyadic wavelet theory. Dyadic networks are easy to construct, and their computational cost is limited. Simulation results of a flight control system are presented to illustrate the performance of dyadic networks.
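The paper's exact network construction is not given here, but the general idea of a linearly parameterized model over dyadically dilated and translated wavelets can be sketched. Everything specific below (the Mexican-hat mother wavelet, the scale and shift grids, and the least-squares fit) is an assumption for illustration, not the paper's method.

```python
import numpy as np

def mexican_hat(x):
    # A common choice of mother wavelet (second derivative of a Gaussian).
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def dyadic_design_matrix(x, scales=(0, 1, 2, 3), shifts=range(-8, 9)):
    # Each column is psi(2^j * x - k): a dyadic dilation/translation
    # of the mother wavelet. The model is linear in the weights.
    cols = [mexican_hat((2.0**j) * x - k) for j in scales for k in shifts]
    return np.stack(cols, axis=1)

x = np.linspace(-2, 2, 400)
target = np.sin(3 * x) * np.exp(-x**2)   # toy function to approximate

Phi = dyadic_design_matrix(x)            # shape (400, 4 * 17) = (400, 68)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ w
rmse = np.sqrt(np.mean((approx - target) ** 2))
```

Because the network is linear in its weights, fitting reduces to a least-squares problem; in an adaptive-control setting the weights would instead be updated online by an adaptation law.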