Arrow Research search

Author name cluster

Zaoming Yan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers


NeurIPS 2025 · Conference Paper

Surface-Aware Feed-Forward Quadratic Gaussian for Frame Interpolation with Large Motion

  • Zaoming Yan
  • Yaomin Huang
  • Pengcheng Lei
  • Qizhou Chen
  • Guixu Zhang
  • Faming Fang

Motion in the real world takes place in 3D space. Existing frame interpolation methods often estimate global receptive fields in 2D frame space. Due to the limitations of 2D space, these global receptive fields are constrained, making it difficult to match object correspondences between frames and resulting in sub-optimal performance in large-motion scenarios. In this paper, we introduce a novel pipeline for exploring object correspondences based on differential surface theory. The differential surface coordinate system provides a better representation of the real world, enabling effective exploration of object correspondences. Specifically, the pipeline first transforms an input pair of video frames from the image coordinate system to the differential surface coordinate system. Within this coordinate system, object correspondences are then explored based on surface geometric properties and the surface uniqueness theorem. Experiments show that our method attains state-of-the-art performance on video frame interpolation (VFI) benchmarks with large motion.
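The abstract's two-stage pipeline (lift each frame from image coordinates into a surface-aware representation, then match correspondences via geometric properties) might be sketched, very loosely, as follows. This is only a structural illustration under our own assumptions: the depth input, gradient-based normals, and nearest-neighbour matching are stand-ins, not the paper's actual differential-surface construction or uniqueness-theorem matching.

```python
import numpy as np

def lift_to_surface(frame, depth):
    """Hypothetical lifting step: embed each pixel into a surface-aware
    feature space. The paper's differential-surface coordinates would encode
    local geometry rigorously; here we approximate with (u, v, depth),
    depth-gradient normals, and pixel intensity."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    points = np.stack([u, v, depth], axis=-1).astype(np.float64)
    # Normals approximated from depth gradients (a stand-in for the
    # geometric properties derived from surface theory in the paper).
    gy, gx = np.gradient(depth)
    normals = np.stack([-gx, -gy, np.ones_like(depth)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    intensity = frame[..., None].astype(np.float64)
    return np.concatenate([points, normals, intensity], axis=-1)  # (h, w, 7)

def match_correspondences(surf0, surf1):
    """Toy nearest-neighbour matching in the lifted feature space; the
    paper instead exploits the surface uniqueness theorem."""
    f0 = surf0.reshape(-1, surf0.shape[-1])
    f1 = surf1.reshape(-1, surf1.shape[-1])
    dists = np.linalg.norm(f0[:, None, :] - f1[None, :, :], axis=-1)
    return dists.argmin(axis=1)  # frame-1 pixel index for each frame-0 pixel
```

The brute-force O(n²) matching is only workable on toy inputs; the point is the shape of the pipeline, not its cost.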

NeurIPS 2025 · Conference Paper

UniEdit: A Unified Knowledge Editing Benchmark for Large Language Models

  • Qizhou Chen
  • Dakan Wang
  • Taolin Zhang
  • Zaoming Yan
  • Chengsong You
  • Chengyu Wang
  • Xiaofeng He

Model editing aims to efficiently revise incorrect or outdated knowledge within LLMs without incurring the high cost of full retraining or risking catastrophic forgetting. Currently, most LLM editing datasets are confined to narrow knowledge domains and cover a limited range of editing evaluation. They often overlook the broad scope of editing demands and the diversity of ripple effects resulting from edits. In this context, we introduce UniEdit, a unified benchmark for LLM editing grounded in open-domain knowledge. First, we construct editing samples by selecting entities from 25 common domains across five major categories, utilizing the extensive triple knowledge available in open-domain knowledge graphs to ensure comprehensive coverage of the knowledge domains. To address the issues of generality and locality in editing, we design a Neighborhood Multi-hop Chain Sampling (NMCS) algorithm that samples subgraphs around a given knowledge piece, so that the evaluation covers comprehensive ripple effects. Finally, we employ proprietary LLMs to convert the sampled knowledge subgraphs into natural language text, guaranteeing grammatical accuracy and syntactic diversity. Extensive statistical analysis confirms the scale, comprehensiveness, and diversity of our UniEdit benchmark. We conduct comprehensive experiments across multiple LLMs and editors, analyzing their performance to highlight strengths and weaknesses in editing across open knowledge domains and various evaluation criteria, thereby offering valuable insights for future research.
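The core idea behind multi-hop chain sampling (following relation edges outward from a seed entity to capture ripple effects) can be sketched as a simple random walk over a triple store. This is our own minimal illustration, not the paper's NMCS algorithm: the graph format, walk policy, and parameters below are assumptions.

```python
import random

def sample_multihop_chains(graph, seed, max_hops=2, num_chains=3, rng=None):
    """Hypothetical sketch of neighborhood multi-hop chain sampling:
    draw `num_chains` random walks of up to `max_hops` edges from `seed`.
    `graph` maps a head entity to a list of (relation, tail) pairs."""
    rng = rng or random.Random(0)
    chains = []
    for _ in range(num_chains):
        chain, node = [], seed
        for _ in range(max_hops):
            edges = graph.get(node, [])
            if not edges:
                break  # dead end: stop this chain early
            relation, tail = rng.choice(edges)
            chain.append((node, relation, tail))
            node = tail
        if chain:
            chains.append(chain)
    return chains

# Toy open-domain knowledge graph (illustrative triples only).
toy_graph = {
    "Paris": [("capital_of", "France")],
    "France": [("member_of", "EU"), ("borders", "Spain")],
}
chains = sample_multihop_chains(toy_graph, "Paris")
```

Each returned chain is a sequence of (head, relation, tail) triples starting at the seed; a downstream step, as the abstract describes, would verbalize such subgraphs into natural-language evaluation text.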