Arrow Research search

Author name cluster

Pingping Liu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

AAAI 2026 Conference Paper

Beyond Illumination: Fine-Grained Detail Preservation in Extreme Dark Image Restoration

  • Tongshun Zhang
  • Pingping Liu
  • Zixuan Zhong
  • Zijian Zhang
  • Qiuzhan Zhou

Recovering fine-grained details in extremely dark images remains challenging due to severe structural information loss and noise corruption. Existing enhancement methods often fail to preserve intricate details and sharp edges, limiting their effectiveness in downstream applications like text and edge detection. To address these deficiencies, we propose an efficient dual-stage approach centered on detail recovery for dark images. In the first stage, we introduce a Residual Fourier-Guided Module (RFGM) that effectively restores global illumination in the frequency domain. RFGM captures inter-stage and inter-channel dependencies through residual connections, providing robust priors for high-fidelity frequency processing while mitigating error accumulation risks from unreliable priors. The second stage employs complementary Mamba modules specifically designed for textural structure refinement: (1) Patch Mamba operates on channel-concatenated non-downsampled patches, meticulously modeling pixel-level correlations to enhance fine-grained details without resolution loss. (2) Grad Mamba explicitly focuses on high-gradient regions, alleviating state decay in state space models and prioritizing reconstruction of sharp edges and boundaries. Extensive experiments on multiple benchmark datasets and downstream applications demonstrate that our method significantly improves detail recovery performance while maintaining efficiency. Crucially, the proposed modules are lightweight and can be seamlessly integrated into existing Fourier-based frameworks with minimal computational overhead.
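The first stage of the method above restores global illumination in the frequency domain. As a rough illustration only (not the paper's RFGM, which is a learned residual module), the toy function below, with the hypothetical name `fourier_illumination_boost` and `gain` parameter, shows the underlying idea: scaling the Fourier amplitude spectrum brightens an image globally while leaving the phase, which carries most structural detail, untouched.

```python
import numpy as np

def fourier_illumination_boost(img, gain=2.0):
    """Brighten an image by scaling its Fourier amplitude, keeping phase fixed.

    A minimal sketch of frequency-domain illumination adjustment; the paper's
    RFGM learns such a mapping with residual guidance rather than a fixed gain.
    """
    F = np.fft.fft2(img, axes=(0, 1))
    amp, phase = np.abs(F), np.angle(F)          # split amplitude and phase
    F_boosted = (amp * gain) * np.exp(1j * phase)  # amplify amplitude only
    out = np.real(np.fft.ifft2(F_boosted, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)

# A uniformly dark image becomes uniformly brighter: 0.1 -> 0.2 at gain 2.
dark = np.full((8, 8), 0.1)
bright = fourier_illumination_boost(dark, gain=2.0)
```

Because phase is preserved, edges and textures stay in place; only the global intensity distribution changes, which is why such frequency-domain stages pair well with a separate spatial detail-refinement stage.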

AAAI 2026 Conference Paper

SPJFNet: Self-Mining Prior-Guided Joint Frequency Enhancement for Ultra-Efficient Dark Image Restoration

  • Tongshun Zhang
  • Pingping Liu
  • Zijian Zhang
  • Qiuzhan Zhou

Current dark image restoration methods suffer from severe efficiency bottlenecks, primarily stemming from: computational burden and error correction costs associated with reliance on external priors (manual or cross-modal); redundant operations in complex multi-stage enhancement pipelines; and indiscriminate processing across frequency components in frequency-domain methods, leading to excessive global computational demands. To address these challenges, we propose an Efficient Self-Mining Prior-Guided Joint Frequency Enhancement Network (SPJFNet). Specifically, we first introduce a Self-Mining Guidance Module (SMGM) that generates lightweight endogenous guidance directly from the network, eliminating dependence on external priors and thereby bypassing error correction overhead while improving inference speed. Second, through meticulous analysis of different frequency domain characteristics, we reconstruct and compress multi-level operation chains into a single efficient operation via lossless wavelet decomposition and joint Fourier-based advantageous frequency enhancement, significantly reducing parameters. Building upon this foundation, we propose a Dual-Frequency Guidance Framework (DFGF) that strategically deploys specialized high/low frequency branches (wavelet-domain high-frequency enhancement and Fourier-domain low-frequency restoration), decoupling frequency processing to substantially reduce computational complexity. Rigorous evaluation across multiple benchmarks demonstrates that SPJFNet not only surpasses state-of-the-art performance but also achieves significant efficiency improvements, substantially reducing model complexity and computational overhead.
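The dual-frequency framework above rests on a lossless wavelet split: high-frequency bands go to one branch and the low-frequency band to another, and the image can be exactly reassembled afterwards. The sketch below (hypothetical helper names `haar_dwt2` / `haar_idwt2`, not the paper's DFGF) shows a one-level 2-D Haar decomposition with perfect reconstruction, the property that lets each branch be processed independently without information loss.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar decomposition: returns (LL, (LH, HL, HH))."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row-pair averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row-pair details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-low: coarse illumination
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # high-frequency bands: edges, texture
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2: the split is lossless."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x
```

In a dual-branch design, the LL band would feed the low-frequency (illumination) branch and LH/HL/HH the high-frequency (detail) branch, each at half resolution, which is where the computational savings come from.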

IROS 2025 Conference Paper

Complex Robotic Manipulation via Hindsight Goal Diffusion and Graph-based Experience Replay

  • Zihao Sun
  • Zihan Li
  • Jinrui He
  • Yong Song 0005
  • Pingping Liu
  • Qingyang Xu
  • Xianfeng Yuan
  • Rui Song 0002

Goal-conditioned reinforcement learning (GCRL) is an effective method for multi-goal robotic manipulation tasks. Many studies based on hindsight experience replay (HER) and hindsight goal generation (HGG) have enabled the autonomous acquisition of robotic manipulation skills in reward-sparse environments and have greatly improved the learning efficiency of GCRL. However, these methods perform poorly in environments with obstacles and distant goals. In this paper, we propose hindsight goal diffusion and graph-based experience replay (HGD-GER) for complex robotic manipulation. First, obstacle-avoiding graphs are constructed in environments with obstacles, and a graph-based distance metric between goals is established. Second, the proposed HGD approach uses the inherent denoising mechanism of diffusion models, together with the obstacle-avoiding graph-based distance, to generate exploration goals, thereby promoting exploration of obstacle-bypassing areas. Then, the GER module uses the graph-based distance to modify the reward values of replayed experiences, avoiding the bias introduced by HER and improving the learning performance of the RL algorithm under sparse rewards. Finally, we conducted experiments on three robotic manipulation tasks with obstacles and distant goals; the results show that the proposed HGD-GER achieves excellent learning performance. Additionally, the proposed method is deployed on a physical robot.
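The key idea in the abstract above is replacing straight-line distance with an obstacle-avoiding graph distance when relabeling sparse rewards. The toy sketch below (hypothetical names `graph_distance` and `relabeled_reward`, with obstacles encoded as 1s on a grid; not the paper's actual GER implementation) shows why: BFS distance on the graph respects walls, whereas Euclidean distance would reward goals that are unreachable in a straight line.

```python
from collections import deque

def graph_distance(grid, start, goal):
    """BFS shortest-path length on a 4-connected grid; 1 marks an obstacle.

    A stand-in for the paper's obstacle-avoiding graph metric: Euclidean
    distance cuts through walls, graph distance follows reachable cells.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return float("inf")  # goal unreachable

def relabeled_reward(achieved, goal, grid, threshold=1):
    """Sparse reward based on graph distance instead of straight-line distance."""
    return 0.0 if graph_distance(grid, achieved, goal) <= threshold else -1.0

# Wall in the middle column: cells (0,1) and (1,1) are blocked.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
# Straight-line distance from (0,0) to (0,2) is 2, but the detour costs 6.
detour = graph_distance(grid, (0, 0), (0, 2))
```

Relabeling with this metric avoids the HER bias the abstract mentions: an achieved state just across a wall from the goal is no longer treated as a near-success.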