Arrow Research search

Author name cluster

Andrei Graur

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers (4)

NeurIPS 2023 · Conference Paper

Parallel Submodular Function Minimization

  • Deeparnab Chakrabarty
  • Andrei Graur
  • Haotian Jiang
  • Aaron Sidford

We consider the parallel complexity of submodular function minimization (SFM). We provide a pair of methods which obtain two new query-versus-depth trade-offs for minimizing a submodular function defined on subsets of $n$ elements that has integer values between $-M$ and $M$. The first method has depth $2$ and query complexity $n^{O(M)}$, and the second method has depth $\widetilde{O}(n^{1/3} M^{2/3})$ and query complexity $O(\mathrm{poly}(n, M))$. Despite a line of work on improved parallel lower bounds for SFM, prior to our work the only known algorithms for parallel SFM either followed from more general methods for sequential SFM or from highly-parallel minimization of convex $\ell_2$-Lipschitz functions. Interestingly, to obtain our second result we provide the first highly-parallel algorithm for minimizing an $\ell_\infty$-Lipschitz function over the hypercube which achieves near-optimal depth for obtaining constant accuracy.
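As background, this abstract works in the evaluation-oracle model: an algorithm learns about $f$ only by querying its value $f(S)$ on chosen subsets, and "depth" counts rounds of such queries. A minimal sketch of that model only, not of the paper's methods, is a naive minimizer that queries every subset (exponentially many queries; the function name is hypothetical):

```python
from itertools import combinations

def brute_force_sfm(f, ground_set):
    """Minimize a set function given only evaluation-oracle access,
    by querying f on every subset. Exponential in |ground_set|;
    shown only to illustrate the oracle model, not a practical SFM
    algorithm."""
    best_set, best_val = frozenset(), f(frozenset())
    for r in range(1, len(ground_set) + 1):
        for subset in combinations(ground_set, r):
            s = frozenset(subset)
            v = f(s)
            if v < best_val:
                best_set, best_val = s, v
    return best_set, best_val
```

Because every query is independent of the others, all $2^n$ queries could in principle be issued in a single parallel round; the trade-offs in the paper are about doing far better than this while keeping the depth small.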

FOCS 2023 · Conference Paper

Sparse Submodular Function Minimization

  • Andrei Graur
  • Haotian Jiang
  • Aaron Sidford

In this paper we study the problem of minimizing a submodular function $f: 2^{V} \rightarrow \mathbb{R}$ that is guaranteed to have a $k$-sparse minimizer. We give a deterministic algorithm that computes an additive $\epsilon$-approximate minimizer of such an $f$ in $\widetilde{O}(\operatorname{poly}(k) \log(|f| / \epsilon))$ parallel depth using a polynomial number of queries to an evaluation oracle of $f$, where $|f| = \max_{S \subseteq V} |f(S)|$. Further, we give a randomized algorithm that computes an exact minimizer of $f$ with high probability using $\widetilde{O}(|V| \cdot \operatorname{poly}(k))$ queries and polynomial time. When $k = \widetilde{O}(1)$, our algorithms use either nearly-constant parallel depth or a nearly-linear number of evaluation oracle queries. All previous algorithms for this problem either use $\Omega(|V|)$ parallel depth or $\Omega(|V|^{2})$ queries. In contrast to state-of-the-art weakly-polynomial and strongly-polynomial time algorithms for SFM, our algorithms use first-order optimization methods, e.g., mirror descent and follow-the-regularized-leader. We introduce what we call sparse dual certificates, which encode information on the structure of sparse minimizers, and both our parallel and sequential algorithms provide new algorithmic tools that allow first-order optimization methods to compute them efficiently. Correspondingly, our algorithms do not invoke fast matrix multiplication or general linear system solvers, and in this sense are more combinatorial than previous state-of-the-art methods.
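First-order methods typically reach SFM through the Lovász extension, the convex function on $[0,1]^{|V|}$ that agrees with $f$ on the vertices of the hypercube and can be evaluated with the standard sorting (greedy) formula. As a hedged sketch of that standard bridge only, not of the paper's sparse dual certificates, assuming $f(\emptyset) = 0$:

```python
def lovasz_extension(f, x, elements):
    """Evaluate the Lovasz extension of a set function f (with
    f(emptyset) = 0) at a point x in [0,1]^n via the sorting formula:
    visit coordinates in decreasing order of x and accumulate a
    telescoping sum of marginal values over the growing prefix."""
    order = sorted(range(len(elements)), key=lambda i: -x[i])
    value = 0.0
    prev = f(frozenset())  # assumed to be 0
    prefix = set()
    for i in order:
        prefix.add(elements[i])
        cur = f(frozenset(prefix))
        value += x[i] * (cur - prev)  # marginal gain of adding elements[i]
        prev = cur
    return value
```

On a 0/1 indicator vector of a set $S$ this returns exactly $f(S)$, which is why minimizing the (convex) Lovász extension with mirror-descent-style methods recovers a minimizer of $f$.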

FOCS 2022 · Conference Paper

Improved Lower Bounds for Submodular Function Minimization

  • Deeparnab Chakrabarty
  • Andrei Graur
  • Haotian Jiang
  • Aaron Sidford

We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM). Applying this technique, we prove that any deterministic SFM algorithm on a ground set of $n$ elements requires at least $\Omega(n \log n)$ queries to an evaluation oracle. This is the first super-linear query complexity lower bound for SFM and improves upon the previous best lower bound of $2n$ given by [Graur et al., ITCS 2020]. Using our construction, we also prove that any (possibly randomized) parallel SFM algorithm, which can make up to $\mathrm{poly}(n)$ queries per round, requires at least $\Omega(n/\log n)$ rounds to minimize a submodular function. This improves upon the previous best lower bound of $\tilde{\Omega}(n^{1/3})$ rounds due to [Chakrabarty et al., FOCS 2021], and settles the parallel complexity of query-efficient SFM up to logarithmic factors due to a recent advance in [Jiang, SODA 2021].
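To make "families of submodular functions" concrete, a textbook member of such a family (not the paper's lower-bound construction, which is more intricate) is the undirected graph cut function, together with a brute-force check of the defining inequality $f(A) + f(B) \geq f(A \cup B) + f(A \cap B)$:

```python
from itertools import combinations

def cut_value(edges, s):
    """Undirected cut function: the number of edges with exactly one
    endpoint in s. A classic example of a submodular function."""
    return sum(1 for (u, v) in edges if (u in s) != (v in s))

def is_submodular(f, ground_set):
    """Brute-force check of f(A) + f(B) >= f(A | B) + f(A & B) over
    all pairs of subsets; only feasible for tiny ground sets."""
    subsets = [frozenset(c) for r in range(len(ground_set) + 1)
               for c in combinations(sorted(ground_set), r)]
    return all(f(a) + f(b) >= f(a | b) + f(a & b)
               for a in subsets for b in subsets)
```

Lower-bound arguments such as the one above hand the algorithm an adversarially chosen function from a carefully designed family and show that too few queries (or rounds) cannot distinguish its members.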