
JMLR 2025

Simplex Constrained Sparse Optimization via Tail Screening

Journal Article · Artificial Intelligence · Machine Learning

Abstract

We consider the sparse recovery problem under a probability-simplex constraint. The Lasso-type penalty commonly used to promote sparsity is ineffective in this setting, since it is constant on the simplex. Fortunately, the simplex constraint itself brings a self-regularization property: the empirical risk minimizer, without any sparsity-promoting procedure, already attains the usual Lasso-type estimation error. Moreover, we analyze the iterates of a projected gradient descent method and show that they converge to the sparse ground truth at a geometric rate until a satisfactory statistical precision is attained. Although this estimation error is statistically optimal, the resulting solution is usually denser than the sparse ground truth. To further sparsify the iterates, we propose a method called PERMITS, which embeds a tail-screening procedure into projected gradient descent: negligible components are identified and discarded during the iterations. We further combine tail screening with a special information criterion to balance the trade-off between fitness and complexity. Theoretically, PERMITS exactly recovers the ground-truth support set under mild conditions and thus enjoys the oracle property. We demonstrate the statistical and computational efficiency of PERMITS on both synthetic and real data. An implementation is available at https://github.com/abess-team/PERMITS.
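The abstract describes projected gradient descent over the probability simplex with an embedded tail-screening step. Below is a minimal, hedged sketch of that idea for a least-squares loss: it is not the authors' PERMITS implementation (see the linked repository for that), and the screening threshold `tau`, step size, and iteration count are illustrative assumptions. The simplex projection uses the standard sort-based algorithm.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {beta : beta >= 0, sum(beta) = 1}, via the classic sort-based rule."""
    u = np.sort(v)[::-1]                     # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)   # shift that enforces sum = 1
    return np.maximum(v + theta, 0.0)

def pgd_tail_screening(X, y, tau=1e-3, n_iter=500, step=None):
    """Projected gradient descent for min_beta 0.5 * ||y - X beta||^2
    over the simplex, with a tail-screening step that discards
    coordinates below the (hypothetical) threshold `tau`."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L for the quadratic loss
    beta = np.full(p, 1.0 / p)                   # uniform initialization
    support = np.arange(p)                       # active coordinates
    for _ in range(n_iter):
        Xs = X[:, support]
        grad = Xs.T @ (Xs @ beta[support] - y)
        beta[support] = project_simplex(beta[support] - step * grad)
        # Tail screening: drop negligible components, then re-project
        # the survivors so the iterate stays on the simplex.
        keep = beta[support] > tau
        if keep.sum() < len(support):
            beta[support[~keep]] = 0.0
            support = support[keep]
            beta[support] = project_simplex(beta[support])
    return beta
```

In this sketch, screening shrinks the active set as the iterates approach a sparse point, so later gradient steps touch fewer columns of `X`; re-projecting after each screening pass keeps the iterate feasible.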

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
Journal of Machine Learning Research
Archive span
2000-2026
Indexed papers
4180
Paper id
1124227905640163724