Arrow Research search

Author name cluster

Yingli Tian

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers

3

AAAI Conference 2026 Conference Paper

SepPrune: Structured Pruning for Efficient Deep Speech Separation

  • Yuqi Li
  • Kai Li
  • Xin Yin
  • Zhifei Yang
  • Zeyu Dong
  • Zhengtao Yao
  • Haoyan Xu
  • Yingli Tian

Although deep learning has substantially advanced speech separation in recent years, most existing studies continue to prioritize separation quality while overlooking computational efficiency, an essential factor for low-latency speech processing in real-time applications. In this paper, we propose SepPrune, the first structured pruning framework specifically designed to compress deep speech separation models and reduce their computational cost. SepPrune begins by analyzing the computational structure of a given model to identify layers with the highest computational burden. It then introduces a differentiable masking strategy to enable gradient-driven channel selection. Based on the learned masks, SepPrune prunes redundant channels and fine-tunes the remaining parameters to recover performance. Extensive experiments demonstrate that this learnable pruning paradigm yields substantial advantages for channel pruning in speech separation models, outperforming existing methods. Notably, a model pruned with SepPrune can recover 85% of the performance of a pre-trained model (trained over hundreds of epochs) with only one epoch of fine-tuning, and achieves convergence 36x faster than training from scratch.
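The pipeline described in the abstract (learn a per-channel mask, binarize it, drop the pruned channels, then fine-tune the survivors) can be sketched in a few lines. The logits, the 0.5 threshold, and all helper names below are illustrative assumptions, not details from the paper, and the mask training itself is elided:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def select_channels(mask_logits, keep_threshold=0.5):
    """Binarize learned per-channel mask logits into keep/prune decisions.
    In the actual framework the logits would be trained end-to-end through
    a differentiable relaxation; here they are simply given."""
    return [i for i, m in enumerate(mask_logits) if sigmoid(m) >= keep_threshold]

def prune_weight_matrix(weights, kept):
    """Drop pruned output channels (rows) from a dense weight matrix;
    the remaining rows would then be fine-tuned to recover performance."""
    return [weights[i] for i in kept]

# Toy example: 4 output channels, 2 survive the learned mask.
logits = [2.3, -1.7, 0.9, -3.0]
kept = select_channels(logits)   # channels whose mask sigmoid is >= 0.5
weights = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
pruned = prune_weight_matrix(weights, kept)  # 2 rows remain
```

The point of the differentiable mask is that channel selection is driven by gradients during training rather than by a post-hoc magnitude heuristic; the hard threshold is applied only once, at pruning time.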

NeurIPS Conference 2019 Conference Paper

Incremental Scene Synthesis

  • Benjamin Planche
  • Xuejian Rong
  • Ziyan Wu
  • Srikrishna Karanam
  • Harald Kosch
  • Yingli Tian
  • Jan Ernst
  • Andreas Hutter

We present a method to incrementally generate complete 2D or 3D scenes with the following properties: (a) it is globally consistent at each step according to a learned scene prior, (b) real observations of a scene can be incorporated while maintaining global consistency, (c) unobserved regions can be hallucinated locally, consistently with previous observations, hallucinations, and global priors, and (d) hallucinations are statistical in nature, i.e., different scenes can be generated from the same observations. To achieve this, we model the virtual scene, where an active agent at each step can either perceive an observed part of the scene or generate a local hallucination. The latter can be interpreted as the agent's expectation at this step through the scene and can be applied to autonomous navigation. In the limit of observing real data at each point, our method converges to solving the SLAM problem. It can otherwise sample entirely imagined scenes from prior distributions. Besides autonomous agents, applications include problems where large amounts of data are required for building robust real-world applications, but few samples are available. We demonstrate efficacy on various 2D as well as 3D data.
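The perceive-or-hallucinate loop described above can be caricatured in a few lines; the 1-D scene, the trivial {0, 1} prior, and all names here are invented for illustration and are not the paper's model:

```python
import random

def incremental_fill(length, observations, seed=0):
    """Sweep an agent over a 1-D 'scene' of `length` cells: where a real
    observation exists, incorporate it; elsewhere, sample a hallucination
    from a trivial prior over {0, 1}. A toy stand-in for the agent loop."""
    rng = random.Random(seed)  # seeded so repeated runs are reproducible
    out = []
    for i in range(length):
        if i in observations:
            out.append(observations[i])   # perceive: incorporate real data
        else:
            out.append(rng.choice([0, 1]))  # hallucinate from the prior
    return out

scene = incremental_fill(5, {1: 7, 3: 9})
# Observed cells are reproduced exactly; unobserved cells vary with the
# seed, mirroring property (d): different scenes from the same observations.
```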

IJCAI Conference 2016 Conference Paper

Demo: Assisting Visually Impaired People Navigate Indoors

  • J. Pablo Muñoz
  • Bing Li
  • Xuejian Rong
  • Jizhong Xiao
  • Yingli Tian
  • Aries Arditi

Research in Artificial Intelligence, Robotics and Computer Vision has recently made great strides in improving indoor localization. Publicly available technology now allows for indoor localization with very small margins of error. In this demo, we show a system that uses state-of-the-art technology to assist visually impaired people navigate indoors. Our system takes advantage of spatial representations from CAD files, or floor plan images, to extract valuable information that later can be used to improve navigation and human-computer interaction. Using depth information, our system is capable of detecting obstacles and guiding the user to avoid them.
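The depth-based obstacle detection mentioned at the end can be sketched as a simple proximity check over a depth map; the safe-distance threshold, the single-row input, and the function name are illustrative assumptions, not the demo's actual implementation:

```python
def detect_obstacles(depth_row_m, safe_distance_m=1.0):
    """Flag indices in one row of a depth map (readings in metres) that
    fall closer than the safe distance. A simplified stand-in for the
    demo's depth-based obstacle detection and avoidance guidance."""
    return [i for i, d in enumerate(depth_row_m) if d < safe_distance_m]

# Toy depth readings across the camera's horizontal field of view.
row = [3.2, 2.8, 0.6, 0.5, 2.9]
close = detect_obstacles(row)  # columns 2 and 3 are within 1 m
```

A real system would operate on full depth frames and fuse the result with the floor-plan-derived map before issuing guidance, but the core test is this per-reading proximity comparison.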