
Author name cluster

Jiangjun Peng

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers · 1 author record

Possible papers (5)

AAAI 2026 · Conference Paper

Fast Guaranteed Robust Local-Smooth Principal Component Separation

  • Mingdi Hu
  • Hailin Wang
  • Shuaijiang Li
  • Kexin Shi
  • Jiangjun Peng

Leveraging intrinsic data priors is critical for effective data recovery. However, existing approaches often struggle to achieve theoretical guarantees, strong performance, and computational efficiency simultaneously. In this paper, we introduce a novel Representative Coefficient Correlated Total Variation (RCCTV) regularizer that captures the recently observed low-rankness and local-smoothness properties of the representative coefficient tensor derived from a low-rank decomposition. The RCCTV regularizer offers three key advantages: (1) it operates on a compact representative coefficient image significantly smaller than the original data, enabling highly efficient optimization; (2) it jointly enforces low-rankness and spatial smoothness through a single regularizer, eliminating the need for trade-off parameters; and (3) when integrated into a robust PCA framework (i.e., the RCCTV-RPCA model), it admits provable exact recovery under mild conditions. To solve the resulting model, we develop an efficient ADMM-based algorithm accelerated via the fast Fourier transform. Extensive experiments on both synthetic and real-world datasets demonstrate that the RCCTV-RPCA model achieves state-of-the-art accuracy while running significantly faster. Our code and Supplementary Material are available at https://github.com/mendy-2013/RCCTV.
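
As a rough, hypothetical illustration of the "nuclear norm of gradient maps" idea behind correlated-total-variation regularizers (this is not the paper's RCCTV implementation, and `ctv_value` is an invented name), one can sum the nuclear norms of the finite-difference images of a coefficient matrix along each spatial axis:

```python
import numpy as np

def ctv_value(coeff_img):
    """Sum of nuclear norms of the finite-difference maps of a 2-D
    coefficient image along each spatial axis (a correlated-total-
    variation-style value; illustrative, not the paper's RCCTV)."""
    total = 0.0
    for axis in (0, 1):
        grad = np.diff(coeff_img, axis=axis)       # first-order differences
        total += np.linalg.norm(grad, ord="nuc")   # nuclear norm of the gradient map
    return total

rng = np.random.default_rng(0)
# A low-rank, piecewise-smooth surrogate "representative coefficient image".
U = rng.standard_normal((64, 3))
V = np.cumsum(rng.standard_normal((3, 64)), axis=1)
print(ctv_value(U @ V))
```

A single term of this form stays small only when the gradient maps are simultaneously low-rank and low-energy, which is how one regularizer can couple the two priors without a balance parameter.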

IJCAI 2025 · Conference Paper

Beyond Low-rankness: Guaranteed Matrix Recovery via Modified Nuclear Norm

  • Jiangjun Peng
  • Yisi Luo
  • Xiangyong Cao
  • Shuang Xu
  • Deyu Meng

The nuclear norm (NN) has been widely explored in matrix recovery problems, such as Robust PCA and matrix completion (MC), leveraging the inherent global low-rank structure of the data. In this study, we introduce a new modified nuclear norm (MNN) framework, where the MNN family of norms is defined by adopting a suitable transformation and applying the NN to the transformed matrix. The MNN framework offers two main advantages: (1) it jointly captures both local information and global low-rankness without requiring trade-off parameter tuning; (2) under mild assumptions on the transformation, we provide theoretical recovery guarantees for both Robust PCA and MC tasks, an achievement not shared by existing methods that combine local and global information. Thanks to its general and flexible design, MNN can accommodate various proven transformations, enabling a unified and effective approach to structured low-rank recovery. Extensive experiments demonstrate the effectiveness of our method. Code and supplementary material are available at https://github.com/andrew-pengjj/modified_nuclear_norm.
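
The defining recipe, "transform, then take the nuclear norm," is easy to state in code. The sketch below is only a minimal illustration under an assumed column-difference transform; `modified_nuclear_norm` and `column_diff` are hypothetical names, not from the paper's released code:

```python
import numpy as np

def column_diff(X):
    """A simple local-smoothness transform: first-order differences
    along columns (one assumed member of the transformation family)."""
    return np.diff(X, axis=1)

def modified_nuclear_norm(X, transform):
    """'Transform, then nuclear norm': the MNN-style value of X under
    a given transformation. Illustrative sketch only."""
    return np.linalg.norm(transform(X), ord="nuc")

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 50))  # rank-4 matrix
print(modified_nuclear_norm(X, column_diff))   # MNN with a difference transform
print(modified_nuclear_norm(X, lambda M: M))   # identity transform recovers the plain NN
```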

IJCAI 2025 · Conference Paper

Fast Guaranteed Tensor Recovery with Adaptive Tensor Nuclear Norm

  • Jiangjun Peng
  • Hailin Wang
  • Xiangyong Cao
  • Shuang Xu

Real-world datasets like multi-spectral images and videos are naturally represented as tensors. However, limitations in data acquisition often lead to corrupted or incomplete tensor data, making tensor recovery a critical challenge. Solving this problem requires exploiting inherent structural patterns, with the low-rank property being particularly vital. An important category of existing low-rank tensor recovery methods relies on the tensor nuclear norms. However, these methods struggle with either computational inefficiency or weak theoretical guarantees for large-scale data. To address these issues, we propose a fast guaranteed tensor recovery framework based on a new tensor nuclear norm. Our approach adaptively extracts a column-orthogonal matrix from the data, reducing a large-scale tensor into a smaller subspace for efficient processing. This dimensionality reduction enhances speed without compromising accuracy. The recovery theories of two typical models are established by introducing an adjusted incoherence condition. Extensive experiments demonstrate the effectiveness of the proposed method, showing improved accuracy and speed over existing approaches. Our code and supplementary material are available at https://github.com/andrew-pengjj/adaptive_tensor_nuclear_norm.
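
To make the dimensionality-reduction idea concrete, here is a minimal, assumed sketch (not the authors' algorithm): extract a column-orthogonal basis from a mode-1 unfolding via a truncated SVD and project the tensor into the resulting smaller subspace. All names are illustrative:

```python
import numpy as np

def reduce_tensor(T, rank):
    """Extract a column-orthogonal basis Q of the mode-1 unfolding via a
    truncated SVD and project the tensor onto it, shrinking dimension 1
    from n1 to `rank`. Sketch of the reduction step only."""
    n1, n2, n3 = T.shape
    unfold = T.reshape(n1, n2 * n3)                   # mode-1 unfolding
    U, _, _ = np.linalg.svd(unfold, full_matrices=False)
    Q = U[:, :rank]                                   # column-orthogonal (Q.T @ Q = I)
    small = (Q.T @ unfold).reshape(rank, n2, n3)      # tensor in the smaller subspace
    return Q, small

rng = np.random.default_rng(2)
T = rng.standard_normal((100, 30, 30))
Q, small = reduce_tensor(T, rank=5)
print(Q.shape, small.shape)                           # (100, 5) (5, 30, 30)
```

Subsequent optimization can then run on `small` and lift results back through `Q`, which is where the speed advantage of working in the reduced subspace comes from.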

NeurIPS 2023 · Conference Paper

Preconditioning Matters: Fast Global Convergence of Non-convex Matrix Factorization via Scaled Gradient Descent

  • Xixi Jia
  • Hailin Wang
  • Jiangjun Peng
  • Xiangchu Feng
  • Deyu Meng

Low-rank matrix factorization (LRMF) is a canonical problem in non-convex optimization: the objective function to be minimized is non-convex and even non-smooth, which makes the global convergence guarantee of gradient-based algorithms quite challenging. Recent work made a breakthrough by proving that standard gradient descent converges to an $\varepsilon$-global minimum after $O( \frac{d \kappa^2}{\tau^2} \ln \frac{d \sigma_d}{\tau} + \frac{d \kappa^2}{\tau^2} \ln \frac{\sigma_d}{\varepsilon})$ iterations from a small initialization with a very small learning rate (both related to the small constant $\tau$). However, the dependence of the convergence on the \textit{condition number} $\kappa$ and on a \textit{small learning rate} makes it impractical, especially for ill-conditioned LRMF problems. In this paper, we show that preconditioning helps accelerate the convergence and prove that scaled gradient descent (ScaledGD) and its variant, alternating scaled gradient descent (AltScaledGD), converge to an $\varepsilon$-global minimum after $O( \ln \frac{d}{\delta} + \ln \frac{d}{\varepsilon})$ iterations from a general random initialization. Meanwhile, for a small initialization as in gradient descent, both ScaledGD and AltScaledGD converge to an $\varepsilon$-global minimum after only $O(\ln \frac{d}{\varepsilon})$ iterations. Furthermore, we prove that, as a proxy for alternating minimization, AltScaledGD converges faster than ScaledGD; its global convergence relies on neither a small learning rate nor a small initialization, which certifies the advantages of AltScaledGD in LRMF.
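
The preconditioning itself is simple to write down: each factor's gradient is right-multiplied by the inverse Gram matrix of the other factor. The following is a minimal sketch of a ScaledGD-style loop on the plain least-squares objective $\|LR^\top - Y\|_F^2$ (the step size, iteration count, and initialization scale are illustrative choices, not the paper's settings):

```python
import numpy as np

def scaled_gd(Y, r, eta=0.5, iters=300, seed=0):
    """Scaled gradient descent for Y ≈ L @ R.T: each factor's gradient is
    preconditioned by the inverse Gram matrix of the other factor, so the
    step size need not shrink with the condition number. Minimal sketch."""
    m, n = Y.shape
    rng = np.random.default_rng(seed)
    L = 0.1 * rng.standard_normal((m, r))
    R = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        E = L @ R.T - Y                                        # residual
        L_new = L - eta * (E @ R) @ np.linalg.inv(R.T @ R)     # preconditioned step in L
        R_new = R - eta * (E.T @ L) @ np.linalg.inv(L.T @ L)   # preconditioned step in R
        L, R = L_new, R_new
    return L, R

# Ill-conditioned rank-2 target (condition number 100).
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((40, 2)))
V, _ = np.linalg.qr(rng.standard_normal((40, 2)))
Y = U @ np.diag([100.0, 1.0]) @ V.T
L, R = scaled_gd(Y, r=2)
print(np.linalg.norm(L @ R.T - Y) / np.linalg.norm(Y))   # relative error
```

Note that with $\eta = 1$ each preconditioned step lands exactly on the least-squares best response for that factor, which is why this family of updates behaves like (damped) alternating minimization and why its progress is insensitive to $\kappa$.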

AAAI 2023 · Conference Paper

Tensor Compressive Sensing Fused Low-Rankness and Local-Smoothness

  • Xinling Liu
  • Jingyao Hou
  • Jiangjun Peng
  • Hailin Wang
  • Deyu Meng
  • Jianjun Wang

Numerous previous studies indicate that making full use of the intrinsic properties of the original data is an effective pathway to recovering images from their degraded observations. Typically, both low-rankness and local smoothness broadly exist in real-world tensor data such as hyperspectral images and videos. Modeling based on both properties has received a great deal of attention, whereas most studies concentrate on experimental performance, and theoretical investigations are still lacking. In this paper, we study the tensor compressive sensing problem based on the tensor correlated total variation, a new regularizer used to simultaneously capture both properties in the same dataset. The new regularizer has the outstanding advantage of not requiring a trade-off parameter to balance the two properties. The obtained theories provide a robust recovery guarantee, where the error bound shows that our model adaptively benefits from both properties in the ground-truth data. Moreover, based on the ADMM update procedure, we design an algorithm with a global convergence guarantee to solve this model. Finally, we apply our model to hyperspectral image and video restoration problems. The experimental results show that our method is markedly better than many competing ones. Our code and Supplementary Material are available at https://github.com/fsliuxl/cs-tctv.
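
For readers unfamiliar with the ADMM pattern referenced above, the sketch below shows the standard three-step splitting on a simple compressive sensing surrogate, with an $\ell_1$ proximal step standing in for the tensor correlated total variation prox used in the paper; all names and parameter values are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_cs(A, y, lam=0.05, rho=1.0, iters=300):
    """ADMM for min_x 0.5*||A x - y||^2 + lam*||z||_1  s.t. x = z.
    The l1 prox stands in for the paper's TCTV prox; the splitting,
    x/z updates, and dual ascent follow the standard ADMM pattern."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # primal, split, scaled dual
    Aty = A.T @ y
    solve = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cache the x-update solve
    for _ in range(iters):
        x = solve @ (Aty + rho * (z - u))             # x-update: ridge least squares
        z = soft_threshold(x + u, lam / rho)          # z-update: prox of the regularizer
        u = u + x - z                                 # dual update
    return z

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, size=6, replace=False)] = rng.standard_normal(6)
y = A @ x_true
x_hat = admm_cs(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # recovery error
```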