STOC 2023
Optimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization
Abstract
The problem of learning threshold functions is a fundamental one in machine learning. Classical learning theory implies a sample complexity of O(ξ^{-1} log(1/β)) (for generalization error ξ with confidence 1−β). The private version of the problem, however, is more challenging; in particular, the sample complexity must depend on the size |X| of the domain. Progress on quantifying this dependence, via lower and upper bounds, was made in a line of works over the past decade. In this paper, we finally close the gap for approximate-DP and provide a nearly tight upper bound of Õ(log* |X|), which matches a lower bound by Alon et al. (that applies even with improper learning) and improves over the prior upper bound of Õ((log* |X|)^{1.5}) by Kaplan et al. We also provide matching upper and lower bounds of Θ̃(2^{log* |X|}) for the additive error of private quasi-concave optimization (a related and more general problem). Our improvement is achieved via the novel Reorder-Slice-Compute paradigm for private data analysis, which we believe will have further applications.
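As a quick aid to the reader, log* x is the iterated logarithm: the number of times the logarithm must be applied to x before the result drops to at most 1. It grows extraordinarily slowly, which is why a Θ(log* |X|) sample complexity is nearly independent of the domain size in practice. The following minimal Python sketch (not from the paper; the helper name log_star is ours) illustrates this growth rate:

```python
import math

def log_star(x: float, base: float = 2.0) -> int:
    """Iterated logarithm: how many times log must be applied
    to x before the result is at most 1."""
    count = 0
    while x > 1.0:
        x = math.log(x, base)
        count += 1
    return count

# log* stays in the single digits even for astronomically
# large domains (2.0**1000 is still a representable float).
for exponent in [1, 10, 100, 1000]:
    domain_size = 2.0 ** exponent  # |X| = 2^exponent
    print(f"|X| = 2^{exponent}: log*|X| = {log_star(domain_size)}")
```

Even for a domain of size 2^1000, this gives log* |X| = 5, which is why the dependence on |X| in the paper's bound is often described as nearly constant.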
Authors
Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer
Context
- Venue: ACM Symposium on Theory of Computing
- Archive span: 1969–2025
- Indexed papers: 4364
- Paper id: 660660352535801384