Arrow Research search

Author name cluster

Ye Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
1 author row

Possible papers

6

AAAI Conference 2025 Conference Paper

InstantSticker: Realistic Decal Blending via Disentangled Object Reconstruction

  • Yi Zhang
  • Xiaoyang Huang
  • Yishun Dou
  • Yue Shi
  • Rui Shi
  • Ye Chen
  • Bingbing Ni
  • Wenjun Zhang

We present InstantSticker, a disentangled reconstruction pipeline based on Image-Based Lighting (IBL), which focuses on highly realistic decal blending, simulates stickers attached to the reconstructed surface, and allows for instant editing and real-time rendering. To achieve a stereoscopic impression of the decal, we introduce a shadow factor into IBL, which can be adaptively optimized during training. This allows the shadow brightness of surfaces to be accurately decomposed rather than baked into the diffuse color, ensuring that the edited texture exhibits authentic shading. To address the warping and blurriness of previous methods, we apply As-Rigid-As-Possible (ARAP) parameterization to pre-unfold a specified area of the mesh, and use local UV mapping combined with a neural texture map to enhance the expression of high-frequency details in that area. For instant editing, we utilize the Disney BRDF model, explicitly defining material colors with a 3-channel diffuse albedo. This enables instant replacement of albedo RGB values during editing, avoiding the prolonged optimization required by previous approaches. In our experiments, we introduce the Ratio Variance Warping (RVW) metric to evaluate the local geometric warping of the decal area. Extensive experimental results demonstrate that our method surpasses previous decal blending methods in editing quality, editing speed, and rendering speed, achieving state-of-the-art performance.
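The instant-editing idea in the abstract can be illustrated with a toy diffuse shading computation: if a learned shadow factor scales the lighting term separately from the albedo, swapping the 3-channel albedo re-colors the decal while shading is preserved. A minimal sketch (the Lambertian-style model and all names here are illustrative, not the paper's implementation):

```python
import numpy as np

def shade(albedo, irradiance, shadow_factor):
    """Toy IBL-style diffuse shading: the shadow factor scales the
    lighting term, so editing the albedo leaves shading intact."""
    return shadow_factor * albedo * irradiance

# Instant albedo edit: only the 3-channel diffuse color is swapped.
irradiance = np.array([1.0, 1.0, 1.0])
shadow = 0.4  # stand-in for a learned per-surface-point scalar
red_sticker = shade(np.array([0.9, 0.1, 0.1]), irradiance, shadow)
blue_sticker = shade(np.array([0.1, 0.1, 0.9]), irradiance, shadow)
```

Because the shadow factor is decomposed from the diffuse color, both edits exhibit the same shading attenuation without any re-optimization.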

AAAI Conference 2023 Conference Paper

Fast Fluid Simulation via Dynamic Multi-Scale Gridding

  • Jinxian Liu
  • Ye Chen
  • Bingbing Ni
  • Wei Ren
  • Zhenbo Yu
  • Xiaoyang Huang

Recent learning-based frameworks for Lagrangian (i.e., particle-based) fluid simulation, though bypassing iterative pressure projection via efficient convolution operators, are still time-consuming due to the excessive number of particles. To address this challenge, we propose a dynamic multi-scale gridding method that reduces the number of elements to be processed by exploiting repeated particle motion patterns within consistent regions. Specifically, we hierarchically generate multi-scale micelles in Euclidean space by grouping particles that share similar motion patterns/characteristics, based on super-light motion and scale estimation modules. With little internal motion variation, each micelle is modeled as a single rigid body, with convolution applied only to a single representative particle. In addition, distance-based interpolation propagates relative motion messages among micelles. With this efficient design, the network produces high-visual-fidelity fluid simulations with an inference time of only 4.24 ms/frame (with 6K fluid particles), enabling real-time human-computer interaction and animation. Experimental results on multiple datasets show that our method achieves substantial simulation acceleration with a negligible increase in prediction error.
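The distance-based message propagation described above can be sketched with plain inverse-distance weighting over micelle representatives: each query position receives a blend of per-micelle velocities weighted by proximity. This is an illustrative stand-in for the paper's interpolation, with hypothetical names:

```python
import numpy as np

def propagate_motion(centers, velocities, query, eps=1e-8):
    """Inverse-distance interpolation of per-micelle velocities at a
    query position (illustrative, not the paper's exact scheme)."""
    d = np.linalg.norm(centers - query, axis=1)
    w = 1.0 / (d + eps)          # closer micelles contribute more
    w /= w.sum()                 # normalize to a convex combination
    return w @ velocities

# Two micelles moving in orthogonal directions; query halfway between.
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
velocities = np.array([[1.0, 0.0], [0.0, 1.0]])
v_mid = propagate_motion(centers, velocities, np.array([0.5, 0.0]))
```

At the midpoint both micelles are equidistant, so the interpolated velocity is the simple average of the two.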

NeurIPS Conference 2023 Conference Paper

Stein $\Pi$-Importance Sampling

  • Congye Wang
  • Ye Chen
  • Heishiro Kanagawa
  • Chris J. Oates

Stein discrepancies have emerged as a powerful tool for retrospective improvement of Markov chain Monte Carlo output. However, the question of how to design Markov chains that are well-suited to such post-processing has yet to be addressed. This paper studies Stein importance sampling, in which weights are assigned to the states visited by a $\Pi$-invariant Markov chain to obtain a consistent approximation of $P$, the intended target. Surprisingly, the optimal choice of $\Pi$ is not identical to the target $P$; we therefore propose an explicit construction for $\Pi$ based on a novel variational argument. Explicit conditions for convergence of Stein $\Pi$-Importance Sampling are established. For $\approx 70$% of tasks in the PosteriorDB benchmark, a significant improvement over the analogous post-processing of $P$-invariant Markov chains is reported.
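The weighting step at the heart of Stein importance sampling can be sketched in a few lines: build a Stein kernel matrix from the chain's states and the target's score function, then solve for weights minimizing the kernel Stein discrepancy subject to summing to one. The sketch below uses a 1-D standard Gaussian target, an inverse-multiquadric base kernel, and i.i.d. draws as a stand-in for MCMC output, and drops the non-negativity constraint for a closed-form solve; it is an illustration of the generic recipe, not the paper's $\Pi$-construction:

```python
import numpy as np

def stein_kernel(x, y, score):
    """Langevin Stein kernel built on the 1-D inverse-multiquadric
    base kernel k(x, y) = (1 + (x - y)^2)^(-1/2)."""
    u = x - y
    k = (1.0 + u**2) ** -0.5
    dkx = -u * (1.0 + u**2) ** -1.5              # dk/dx
    dky = -dkx                                   # dk/dy
    dkxy = (1.0 + u**2) ** -1.5 - 3.0 * u**2 * (1.0 + u**2) ** -2.5
    return dkxy + score(x) * dky + score(y) * dkx + score(x) * score(y) * k

score = lambda x: -x                             # score of N(0, 1)
rng = np.random.default_rng(0)
xs = rng.normal(size=50)                         # stand-in for chain states
K = stein_kernel(xs[:, None], xs[None, :], score)
w = np.linalg.solve(K + 1e-8 * np.eye(50), np.ones(50))
w /= w.sum()                                     # Stein importance weights
```

The unconstrained minimizer of $w^\top K w$ under $\sum_i w_i = 1$ is proportional to $K^{-1}\mathbf{1}$, which is what the solve computes (with a small ridge for numerical stability).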

JBHI Journal 2013 Journal Article

Automatic Tracking of Aponeuroses and Estimation of Muscle Thickness in Ultrasonography: A Feasibility Study

  • Shan Ling
  • Yongjin Zhou
  • Ye Chen
  • Yu-Qian Zhao
  • Lei Wang
  • Yong-Ping Zheng

Muscle thickness measurement in ultrasonography has traditionally been conducted by a trained operator, and this manual detection process is time-consuming and subjective. In this paper, we propose an automatic tracking strategy for continuous, quantitative measurement of gastrocnemius muscle thickness in ultrasound images. The method involves three steps: tracking of seed points, contour extraction of the aponeuroses, and muscle thickness estimation. In an ultrasound image sequence, we first manually select two seed points in the first frame, one each for the superficial and deep aponeuroses. Seed points in all subsequent frames are then tracked by registration to their respective previous frames. Second, we adopt a local and global intensity fitting model to extract the contours of the aponeuroses. Finally, the muscle thickness is estimated by calculating the distance between the contours of the superficial and deep aponeuroses. The performance of the algorithm was evaluated on 500 frames of ultrasound images. The experiments demonstrate that the proposed method can objectively track the aponeuroses and estimate muscle thickness in musculoskeletal ultrasound images.
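The final step, thickness from two extracted contours, reduces to averaging the separation between them once both are sampled on the same image columns. A minimal sketch with synthetic contours (vertical pixel distance is a simplification; the paper does not specify this exact formula, and a real pipeline might measure along contour normals and convert pixels to millimetres):

```python
import numpy as np

def muscle_thickness(superficial, deep):
    """Mean vertical distance between the superficial and deep
    aponeurosis contours, sampled on a common set of columns."""
    return float(np.mean(deep - superficial))

cols = np.arange(100)
superficial = 20.0 + 0.01 * cols   # synthetic upper contour (row, in pixels)
deep = 50.0 + 0.01 * cols          # synthetic lower contour
t = muscle_thickness(superficial, deep)
```

Repeating this per frame over the tracked contours yields the continuous thickness curve the abstract describes.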

NeurIPS Conference 2009 Conference Paper

Factor Modeling for Advertisement Targeting

  • Ye Chen
  • Michael Kapralov
  • John Canny
  • Dmitry Pavlov

We adapt a probabilistic latent variable model, namely GaP (Gamma-Poisson), to ad targeting in the contexts of sponsored search (SS) and behaviorally targeted (BT) display advertising. We also approach the important problem of ad positional bias by formulating a one-latent-dimension GaP factorization. Learning from click-through data is intrinsically large scale, even more so for ads. We scale the algorithm up to terabytes of real-world SS and BT data containing hundreds of millions of users and hundreds of thousands of features, by leveraging the scalability characteristics of the algorithm and the inherent structure of the problem, including data sparsity and locality. Specifically, we demonstrate two somewhat orthogonal philosophies of scaling algorithms to large-scale problems through the SS and BT implementations, respectively. Finally, we report experimental results on Yahoo's vast datasets and show that our approach substantially outperforms state-of-the-art methods in prediction accuracy. For BT in particular, the ROC area achieved by GaP exceeds 0.95, whereas a prior approach using Poisson regression yielded 0.83. For computational performance, we compare a single-node sparse implementation with a parallel implementation using Hadoop MapReduce; the results are counterintuitive yet quite interesting. We therefore provide insights into the underlying principles of large-scale learning.
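The computational core of a Gamma-Poisson factorization can be sketched via the multiplicative updates of Poisson (KL-divergence) NMF, which is what GaP's MAP estimate reduces to under flat priors. This is an illustrative skeleton on a tiny dense count matrix, not the paper's full inference or its sparse, terabyte-scale implementation:

```python
import numpy as np

def poisson_nmf(V, k, iters=200, seed=0):
    """Multiplicative updates for Poisson/KL NMF: V (n x m counts) is
    approximated by W @ H with nonnegative factors. Illustrative MAP
    skeleton of a Gamma-Poisson model under flat priors."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        R = V / (W @ H + 1e-10)                        # ratio term
        W *= (R @ H.T) / (H.sum(axis=1) + 1e-10)       # update topics-per-row
        R = V / (W @ H + 1e-10)
        H *= (W.T @ R) / (W.sum(axis=0)[:, None] + 1e-10)
    return W, H

# Toy click-count matrix: 3 users x 2 features, 2 latent factors.
V = np.array([[5.0, 1.0], [1.0, 5.0], [4.0, 0.0]])
W, H = poisson_nmf(V, k=2)
err = np.abs(V - W @ H).mean()
```

The one-latent-dimension variant mentioned for positional bias corresponds to running the same updates with `k=1`, so each position's effect collapses to a single multiplicative factor.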