Arrow Research search

Author name cluster

Huanyu Liu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers (4)

AAAI Conference 2025 Conference Paper

Cross-Spectral Gaussian Splatting with Spatial Occupancy Consistency

  • Haipeng Guo
  • Huanyu Liu
  • Jiazheng Wen
  • Junbao Li

Using images captured by cameras with different light-spectrum sensitivities, training a unified model for cross-spectral scene representation is challenging. Recent advances have shown the possibility of jointly optimizing cross-spectral relative poses and neural radiance fields using normalized cross-device coordinates. However, such a method suffers from cross-spectral misalignment when data are collected asynchronously across devices, and it cannot render in real time or handle large scenes. We address these issues by proposing cross-spectral Gaussian Splatting with spatial occupancy consistency (SOC-GS), which strictly aligns cross-spectral scene representations by sharing explicit Gaussian surfaces across spectra and separately optimizing each view's extrinsics with a matching-optimizing pose estimation method. Additionally, to address field-of-view differences between cross-spectral cameras, we improve the adaptive densification controller to fill non-overlapping areas. Comprehensive experiments demonstrate that SOC-GS achieves superior performance in novel view synthesis and real-time cross-spectral rendering.
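The core design, shared explicit Gaussians with per-spectrum appearance and per-view extrinsics, can be sketched as a toy container (class and field names are illustrative, not SOC-GS's actual API):

```python
import numpy as np

class SharedGaussians:
    """Toy container: one set of Gaussian geometry shared by all spectra,
    separate appearance per spectrum, and an extrinsic per view.
    A sketch of the design idea only, not the paper's implementation."""
    def __init__(self, n, spectra, seed=0):
        rng = np.random.default_rng(seed)
        self.xyz = rng.normal(size=(n, 3))               # shared positions
        self.scale = rng.uniform(0.1, 1.0, size=(n, 3))  # shared shapes
        self.opacity = rng.uniform(size=n)               # shared occupancy proxy
        # Appearance is the only per-spectrum quantity here.
        self.color = {s: rng.uniform(size=(n, 3)) for s in spectra}
        self.extrinsics = {}                             # view id -> 4x4 pose

    def register_view(self, view_id, pose):
        # In the paper each view's extrinsics are optimized separately;
        # here we only store an initial estimate.
        self.extrinsics[view_id] = pose

g = SharedGaussians(100, spectra=("rgb", "nir"))
g.register_view("rgb_cam_0", np.eye(4))
```

Because geometry and opacity live in one shared structure, any occupancy inferred from one spectrum is automatically consistent with the others, which is the intuition behind the spatial occupancy consistency constraint.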

NeurIPS Conference 2025 Conference Paper

Reasoning is Periodicity? Improving Large Language Models Through Effective Periodicity Modeling

  • Yihong Dong
  • Ge Li
  • Xue Jiang
  • Yongding Tao
  • Kechi Zhang
  • Lecheng Wang
  • Hao Zhu
  • Huanyu Liu

Periodicity, one of the most fundamental characteristics of data, lays the foundation for structured knowledge acquisition and systematic cognitive processes in human learning. However, potential flaws in the Transformer's periodicity modeling affect the learning efficiency of large language models (LLMs) built upon it and their ability to establish underlying principles from data. In this paper, we demonstrate that integrating effective periodicity modeling can improve the learning efficiency and performance of LLMs. We introduce FANformer, which adapts the Fourier Analysis Network (FAN) into the attention mechanism to achieve efficient periodicity modeling by modifying its feature projection process. Extensive experimental results on language modeling show that FANformer consistently outperforms the Transformer when scaling up model size and training tokens, underscoring its superior learning efficiency. Our pretrained FANformer-1B exhibits marked improvements on downstream tasks compared to open-source LLMs with a similar number of parameters or training tokens. Moreover, we reveal that FANformer exhibits a superior ability to learn and apply rules for reasoning compared to the Transformer. These results position FANformer as an effective and promising architecture for advancing LLMs.
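The idea of adapting FAN into the attention projection can be illustrated schematically: part of the projected feature passes through cos/sin to expose periodic structure, while the rest stays linear. This is a minimal NumPy sketch under an assumed output-width split; the actual FANformer formulation and hyperparameters differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def fan_projection(x, W_p, W_h):
    # Periodic part: cos/sin of a linear map exposes periodic structure,
    # as in a Fourier Analysis Network layer.
    p = x @ W_p
    # Ordinary linear part, as in a standard attention projection.
    h = x @ W_h
    return np.concatenate([np.cos(p), np.sin(p), h], axis=-1)

d_in, d_p, d_h = 16, 4, 8            # output width = 2*d_p + d_h = 16
W_p = rng.normal(size=(d_in, d_p))
W_h = rng.normal(size=(d_in, d_h))
x = rng.normal(size=(3, d_in))       # 3 token embeddings
q = fan_projection(x, W_p, W_h)
print(q.shape)                       # (3, 16)
```

A drop-in of this map for the query/key projections would give attention scores a built-in periodic basis, which is the mechanism the abstract credits for the efficiency gains.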

NeurIPS Conference 2025 Conference Paper

SATURN: SAT-based Reinforcement Learning to Unleash LLMs Reasoning

  • Huanyu Liu
  • Ge Li
  • Jia Li
  • Hao Zhu
  • Kechi Zhang
  • Yihong Dong

How to design reinforcement learning (RL) tasks that effectively unleash the reasoning capability of large language models (LLMs) remains an open question. Existing RL tasks (e.g., math, programming, and constructed reasoning tasks) suffer from three key limitations: (1) Scalability: they rely heavily on human annotation or expensive LLM synthesis to generate sufficient training data. (2) Verifiability: LLMs' outputs are hard to verify automatically and reliably. (3) Controllable difficulty: most tasks lack fine-grained difficulty control, making it hard to train LLMs to develop reasoning ability from easy to hard. To address these limitations, we propose Saturn, a SAT-based RL framework that uses Boolean Satisfiability (SAT) problems to train and evaluate LLMs' reasoning. Saturn enables scalable task construction, rule-based verification, and precise difficulty control. Saturn designs a curriculum learning pipeline that continuously improves LLMs' reasoning capability by constructing SAT tasks of increasing difficulty and training LLMs from easy to hard. To ensure stable training, we design a principled mechanism to control difficulty transitions. We introduce Saturn-2.6k, a dataset of 2,660 SAT problems of varying difficulty, which supports evaluating how LLM reasoning changes with problem difficulty. We apply Saturn to DeepSeek-R1-Distill-Qwen and obtain Saturn-1.5B and Saturn-7B. We achieve several notable results: (1) On SAT problems, Saturn-1.5B and Saturn-7B achieve average pass@3 improvements of +14.0 and +28.1, respectively. (2) On math and programming tasks, Saturn-1.5B and Saturn-7B improve average scores by +4.9 and +1.8 on benchmarks (e.g., AIME, LiveCodeBench). (3) Compared to the state-of-the-art (SOTA) approach for constructing RL tasks, Saturn achieves a further improvement of +8.8%. We release the source code, data, and models to support future research.
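The scalability, verifiability, and difficulty-control properties all follow from how cheaply SAT instances can be generated and checked. A minimal sketch, assuming random k-SAT with the clause-to-variable ratio as a difficulty knob (the knobs are hypothetical, not Saturn's exact schedule):

```python
import random

def make_sat_instance(n_vars, n_clauses, k=3, seed=0):
    """Generate a random k-SAT instance as a list of clauses.
    Difficulty can be scaled via n_vars and the clause/variable ratio;
    these knobs are illustrative, not Saturn's exact controls."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), k)   # k distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

def verify(clauses, assignment):
    """Rule-based check: every clause must contain a satisfied literal."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

inst = make_sat_instance(n_vars=5, n_clauses=10)
print(len(inst), len(inst[0]))   # 10 3
```

Verification is a linear scan over clauses, so an LLM's proposed assignment can be graded automatically and reliably, which is exactly the property the abstract contrasts with math and programming tasks.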

ICRA Conference 2021 Conference Paper

VIC-Net: Voxelization Information Compensation Network for Point Cloud 3D Object Detection

  • Tianyuan Jiang
  • Nan Song
  • Huanyu Liu
  • Ruihao Yin
  • Ye Gong
  • Jian Yao 0002

Voxel-based methods have been widely used for point cloud 3D object detection. These methods typically transform points into voxels and thus suffer from information loss during point cloud voxelization. To address this problem, we propose a novel one-stage Voxelization Information Compensation Network (VIC-Net) capable of loss-free feature extraction. The framework consists of a point branch for extracting geometric detail and a voxel branch for efficient proposal generation. First, PointNet++ is adopted to efficiently encode geometric structure features from the raw point clouds. Second, based on the encoded point features, two Point2Voxel (P2V) feature fusion modules, Local P2V and Multi-Scale P2V, fuse point features with a voxel backbone; they respectively integrate local detail features and multi-scale semantic context into a sparse voxel backbone. Third, an auxiliary reconstruction loss on the point branch explicitly guides the point backbone to be aware of real geometric structures. In addition, we extend VIC-Net to a two-stage approach, VIC-RCNN, which further utilizes the fine geometric features to refine object locations. Experiments on the KITTI dataset demonstrate that our proposed VIC-Net outperforms other one-stage methods and that our two-stage VIC-RCNN achieves new state-of-the-art performance.
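The direction of the P2V fusion, scattering per-point features into a voxel grid so the voxel branch can consume them, can be sketched with simple mean pooling (a simplified stand-in: VIC-Net fuses PointNet++ features into a sparse voxel backbone, and this dense function is illustrative only):

```python
import numpy as np

def points_to_voxels(points, feats, voxel_size, grid_shape):
    """Mean-pool per-point features into a dense voxel grid.
    Returns the pooled grid and an occupancy mask."""
    # Assign each point to a voxel index, clamped to the grid.
    idx = np.floor(points / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(grid_shape) - 1)
    flat = np.ravel_multi_index(tuple(idx.T), grid_shape)
    n_vox = int(np.prod(grid_shape))
    c = feats.shape[1]
    sums = np.zeros((n_vox, c))
    counts = np.zeros(n_vox)
    np.add.at(sums, flat, feats)     # accumulate features per voxel
    np.add.at(counts, flat, 1.0)     # count points per voxel
    occupied = counts > 0
    sums[occupied] /= counts[occupied, None]
    return sums.reshape(*grid_shape, c), occupied.reshape(grid_shape)

pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [1.5, 0.1, 0.1]])
f = np.array([[1.0], [3.0], [5.0]])
grid, occ = points_to_voxels(pts, f, voxel_size=1.0, grid_shape=(2, 2, 2))
print(grid[0, 0, 0], occ.sum())   # the two co-located points average to [2.]; 2 voxels occupied
```

Mean pooling discards within-voxel geometry, which is precisely the information loss the paper's point branch and auxiliary reconstruction loss are designed to compensate for.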