
STOC 2025

Symmetric Perceptrons, Number Partitioning and Lattices

Conference Paper 11B Algorithms and Complexity · Theoretical Computer Science

Abstract

The symmetric binary perceptron (SBP_κ) problem with parameter κ : ℝ_{≥1} → [0,1] is an average-case search problem defined as follows: given a random Gaussian matrix A ~ N(0,1)^{n×m} as input, where m ≥ n, output a vector x ∈ {−1,1}^m such that ‖Ax‖_∞ ≤ κ(m/n)·√m. The number partitioning problem (NPP_κ) corresponds to the special case n = 1. There is considerable evidence that both problems exhibit large computational-statistical gaps. In this work, we show (nearly) tight average-case hardness for these problems, assuming the worst-case hardness of standard approximate shortest vector problems on lattices.

  • For SBP_κ, solutions exist statistically with κ(x) = 2^{−Θ(x)} (Aubin, Perkins and Zdeborová, Journal of Physics 2019). For large n, the best that efficient algorithms have been able to achieve falls far short of the statistical bound, namely κ(x) = Θ(1/√x) (Bansal and Spencer, Random Structures and Algorithms 2020). The problem has been extensively studied in the TCS and statistics communities, and Gamarnik, Kızıldağ, Perkins and Xu (FOCS 2022) conjecture that Bansal-Spencer is tight: namely, κ(x) = Θ(1/√x) is the optimal value achieved by computationally efficient algorithms. We prove their conjecture assuming the worst-case hardness of approximating the shortest vector problem on lattices.

  • For NPP_κ, solutions exist statistically with κ(m) = Θ(2^{−m}) (Karmarkar, Karp, Lueker and Odlyzko, Journal of Applied Probability 1986). Karmarkar and Karp's classical differencing algorithm achieves κ(m) = 2^{−O(log² m)}. We prove that Karmarkar-Karp is nearly tight: namely, no polynomial-time algorithm can achieve κ(m) = 2^{−Ω(log³ m)}, once again assuming the worst-case subexponential hardness of approximating the shortest vector problem on lattices to within a subexponential factor.

Our hardness results are versatile, and hold with respect to different distributions of the matrix A (e.g., i.i.d. uniform entries from [0,1]) and weaker requirements on the solution vector x.
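As a concrete reading of the search problem above, the following sketch (a minimal illustration, not code from the paper; the name `sbp_satisfied` is ours) samples a Gaussian matrix and checks whether a candidate sign vector x meets the SBP constraint ‖Ax‖_∞ ≤ κ(m/n)·√m:

```python
import math
import random

def sbp_satisfied(A, x, kappa):
    """Check the SBP_kappa constraint ||A x||_inf <= kappa(m/n) * sqrt(m)."""
    n, m = len(A), len(A[0])
    threshold = kappa(m / n) * math.sqrt(m)
    # ||A x||_inf is the largest |row . x| over the n rows of A.
    return all(abs(sum(a_ij * x_j for a_ij, x_j in zip(row, x))) <= threshold
               for row in A)

# Example in the "easy" regime kappa(alpha) = Theta(1/sqrt(alpha)),
# where efficient algorithms (Bansal-Spencer) are known to succeed.
random.seed(0)
n, m = 4, 64
A = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
x = [random.choice((-1, 1)) for _ in range(m)]  # a random (not searched-for) candidate
print(sbp_satisfied(A, x, lambda alpha: 1 / math.sqrt(alpha)))
```

The search problem is to find such an x, not merely to verify one; the hardness results concern how small κ can be while a polynomial-time search still succeeds.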
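The Karmarkar-Karp differencing algorithm referenced above can be sketched as follows (a standard textbook rendering, not code from the paper): repeatedly commit the two largest remaining numbers to opposite sides of the partition, replacing them by their absolute difference; the single value left at the end is the achieved discrepancy |Σ ±aᵢ|.

```python
import heapq
import random

def karmarkar_karp(nums):
    """Largest-differencing heuristic for number partitioning.

    Repeatedly places the two largest remaining values on opposite
    sides of the partition, replacing them by their difference; the
    value left at the end is the final discrepancy.
    """
    heap = [-v for v in nums]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)  # largest
        b = -heapq.heappop(heap)  # second largest
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0

# On m i.i.d. uniform [0,1] inputs -- one of the distributions the
# hardness result covers -- differencing achieves discrepancy
# 2^(-O(log^2 m)), far from the statistically optimal Theta(2^-m).
random.seed(1)
print(karmarkar_karp([random.random() for _ in range(64)]))
```

The sketch returns only the discrepancy; the actual ±1 assignment can be recovered by recording which pairs were differenced and backtracking.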

Authors

Keywords

  • Perceptron model
  • average-case complexity
  • lattice problems
  • worst-case to average-case reductions

Context

Venue
ACM Symposium on Theory of Computing
Archive span
1969-2025
Indexed papers
4364
Paper id
935952375681787771