Arrow Research search

Author name cluster

Alina Ene

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

32 papers
2 author rows

Possible papers

ICML Conference 2025 Conference Paper

Maximum Coverage in Turnstile Streams with Applications to Fingerprinting Measures

  • Alina Ene
  • Alessandro Epasto
  • Vahab Mirrokni
  • Hoai-An Nguyen
  • Huy L. Nguyen 0001
  • David P. Woodruff
  • Peilin Zhong

In the maximum coverage problem we are given $d$ subsets from a universe $[n]$, and the goal is to output $k$ subsets such that their union covers the largest possible number of distinct items. We present the first algorithm for maximum coverage in the turnstile streaming model, where updates which insert or delete an item from a subset arrive one by one. Notably, our algorithm uses only $\mathrm{poly}\log n$ update time. We also present turnstile streaming algorithms for targeted and general fingerprinting for risk management, where the goal is to determine which features pose the greatest re-identification risk in a dataset. As part of our work, we give a result of independent interest: an algorithm to estimate the complement of the $p^{\text{th}}$ frequency moment of a vector for $p \geq 2$. Empirical evaluation confirms the practicality of our fingerprinting algorithms, demonstrating a speedup of up to $210$x over prior work.
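
For readers unfamiliar with the objective, below is a minimal sketch of the classical offline greedy baseline for maximum coverage (the standard $(1-1/e)$-approximation); it is not the paper's turnstile streaming algorithm, and the names are illustrative.

```python
# Minimal offline greedy baseline for maximum coverage (not the paper's
# turnstile streaming algorithm): repeatedly pick the subset covering the
# most not-yet-covered items.
def greedy_max_coverage(subsets, k):
    """subsets: list of sets over the universe; k: number of subsets to pick."""
    covered, chosen = set(), []
    for _ in range(min(k, len(subsets))):
        best = max(range(len(subsets)), key=lambda j: len(subsets[j] - covered))
        if not subsets[best] - covered:
            break  # nothing new can be covered
        chosen.append(best)
        covered |= subsets[best]
    return chosen, covered

# Toy usage: pick k=2 subsets from a universe of 6 items.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max_coverage(sets, 2))   # ([0, 2], {1, 2, 3, 4, 5, 6})
```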

AAAI Conference 2025 Conference Paper

Online and Streaming Algorithms for Constrained k-Submodular Maximization

  • Fabian Christian Spaeh
  • Alina Ene
  • Huy Nguyen

Constrained k-submodular maximization is a general framework that captures many discrete optimization problems such as ad allocation, influence maximization, personalized recommendation, and many others. In many of these applications, datasets are large or decisions need to be made in an online manner, which motivates the development of efficient streaming and online algorithms. In this work, we develop single-pass streaming and online algorithms for constrained k-submodular maximization with both monotone and general (possibly non-monotone) objectives subject to cardinality and knapsack constraints. Our algorithms achieve provable constant-factor approximation guarantees which improve upon the state of the art in almost all settings. Moreover, they achieve the fastest known running times and have optimal space usage. We experimentally evaluate our algorithms on instances for ad allocation and other applications, where we observe that our algorithms are practical and scalable, and construct solutions that are comparable in value even to offline greedy algorithms.

NeurIPS Conference 2025 Conference Paper

Quasi-Self-Concordant Optimization with $\ell_{\infty}$ Lewis Weights

  • Alina Ene
  • Ta Duy Nguyen
  • Adrian Vladu

In this paper, we study the problem $\min_{x\in R^{d}, Nx=v}\sum_{i=1}^{n}f((Ax-b)_{i})$ for a quasi-self-concordant function $f: R\to R$, where $A, N$ are $n\times d$ and $m\times d$ matrices, $b, v$ are vectors of length $n$ and $m$, with $n\ge d$. We show an algorithm based on a trust-region method with an oracle that can be implemented using $\widetilde{O}(d^{1/3})$ linear system solves, improving the $\widetilde{O}(n^{1/3})$ oracle by [Adil-Bullins-Sachdeva, NeurIPS 2021]. Our implementation of the oracle relies on solving the overdetermined $\ell_{\infty}$-regression problem $\min_{x\in R^{d}, Nx=v}\|Ax-b\|_{\infty}$. We provide an algorithm that finds a $(1+\epsilon)$-approximate solution to this problem using $O((d^{1/3}/\epsilon+1/\epsilon^{2})\log(n/\epsilon))$ linear system solves. This algorithm leverages $\ell_{\infty}$ Lewis weight overestimates and achieves this iteration complexity via a simple lightweight IRLS approach, inspired by the work of [Ene-Vladu, ICML 2019]. Experimentally, we demonstrate that our algorithm significantly improves the runtime of the standard CVX solver.
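
For background, one commonly used definition of quasi-self-concordance for a univariate function (the paper's precise assumptions may differ): a convex, three-times differentiable $f:\mathbb{R}\to\mathbb{R}$ is $M$-quasi-self-concordant if

```latex
\lvert f'''(x)\rvert \;\le\; M\, f''(x) \qquad \text{for all } x \in \mathbb{R}.
```

For example, the logistic loss $\log(1+e^{x})$ satisfies this with $M=1$.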

ICML Conference 2024 Conference Paper

Multiplicative Weights Update, Area Convexity and Random Coordinate Descent for Densest Subgraph Problems

  • Ta Duy Nguyen
  • Alina Ene

We study the densest subgraph problem and give algorithms via multiplicative weights update and area convexity that converge in $O\left(\frac{\log m}{\epsilon^{2}}\right)$ and $O\left(\frac{\log m}{\epsilon}\right)$ iterations, respectively, both with nearly-linear time per iteration. Compared with the work by Bahmani et al. (2014), our MWU algorithm uses a very different and much simpler procedure for recovering the dense subgraph from the fractional solution and does not employ a binary search. Compared with the work by Boob et al. (2019), our algorithm via area convexity improves the iteration complexity by a factor of $\Delta$ (the maximum degree in the graph) and matches the fastest theoretical runtime currently known via flows (Chekuri et al., 2022) in total time. Next, we study the dense subgraph decomposition problem and give the first practical iterative algorithm with linear convergence rate $O\left(mn\log\frac{1}{\epsilon}\right)$ via accelerated random coordinate descent. This significantly improves over the $O\left(\frac{m\sqrt{mn\Delta}}{\epsilon}\right)$ time of the FISTA-based algorithm by Harb et al. (2022). In the high precision regime $\epsilon\ll\frac{1}{n}$, where we can even recover the exact solution, our algorithm has a total runtime of $O\left(mn\log n\right)$, matching the state of the art exact algorithm via parametric flows (Gallo et al., 1989). Empirically, we show that this algorithm is very practical and scales to very large graphs, and its performance is competitive with widely used methods that have significantly weaker theoretical guarantees.
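
As context for the densest subgraph objective (maximize $|E(S)|/|S|$ over vertex subsets $S$), here is a sketch of the classical greedy peeling baseline, which gives a $1/2$-approximation; it is not the MWU, area-convexity, or coordinate descent algorithm from the paper, and the names are illustrative.

```python
# Classical greedy peeling baseline for densest subgraph (1/2-approximation):
# repeatedly remove a minimum-degree vertex and remember the densest prefix.
import heapq

def densest_subgraph_peeling(n, edges):
    """n: number of vertices; edges: list of (u, v) pairs of a simple graph."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive, m = set(range(n)), len(edges)
    heap = [(len(adj[v]), v) for v in range(n)]
    heapq.heapify(heap)
    best_density, best_set = 0.0, set(alive)
    while alive:
        if m / len(alive) >= best_density:
            best_density, best_set = m / len(alive), set(alive)
        while True:                          # lazy deletion: skip stale entries
            d, v = heapq.heappop(heap)
            if v in alive and d == len(adj[v]):
                break
        alive.remove(v)
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].remove(v)
            heapq.heappush(heap, (len(adj[u]), u))
        adj[v].clear()
    return best_set, best_density

# Toy usage: a triangle plus a pendant vertex; the triangle has density 1.
print(densest_subgraph_peeling(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))
```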

ICML Conference 2023 Conference Paper

High Probability Convergence of Stochastic Gradient Methods

  • Zijian Liu 0003
  • Ta Duy Nguyen
  • Thien Hang Nguyen
  • Alina Ene
  • Huy L. Nguyen 0001

In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works for convex optimization, either the convergence is only in expectation or the bound depends on the diameter of the domain. Instead, we show high probability convergence with bounds depending on the initial distance to the optimal solution. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations. The method can be applied to the non-convex case. We demonstrate an $O((1+\sigma^{2}\log(1/\delta))/T+\sigma/\sqrt{T})$ convergence rate when the number of iterations $T$ is known and an $O((1+\sigma^{2}\log(T/\delta))/\sqrt{T})$ convergence rate when $T$ is unknown for SGD, where $1-\delta$ is the desired success probability. These bounds improve over existing bounds in the literature. We also revisit AdaGrad-Norm (Ward et al., 2019) and show a new analysis to obtain a high probability bound that does not require the bounded gradient assumption made in previous works. The full version of our paper contains results for the standard per-coordinate AdaGrad.

ICLR Conference 2023 Conference Paper

On the Convergence of AdaGrad(Norm) on $\mathbb{R}^{d}$: Beyond Convexity, Non-Asymptotic Rate and Acceleration

  • Zijian Liu 0003
  • Ta Duy Nguyen
  • Alina Ene
  • Huy L. Nguyen 0001

Existing analysis of AdaGrad and other adaptive methods for smooth convex optimization is typically for functions with bounded domain diameter. In unconstrained problems, previous works guarantee an asymptotic convergence rate without an explicit constant factor that holds true for the entire function class. Furthermore, in the stochastic setting, only a modified version of AdaGrad, different from the one commonly used in practice, in which the latest gradient is not used to update the stepsize, has been analyzed. Our paper aims at bridging these gaps and developing a deeper understanding of AdaGrad and its variants in the standard setting of smooth convex functions as well as the more general setting of quasar convex functions. First, we demonstrate new techniques to explicitly bound the convergence rate of the vanilla AdaGrad for unconstrained problems in both deterministic and stochastic settings. Second, we propose a variant of AdaGrad for which we can show the convergence of the last iterate, instead of the average iterate. Finally, we give new accelerated adaptive algorithms and their convergence guarantee in the deterministic setting with explicit dependency on the problem parameters, improving upon the asymptotic rate shown in previous works.
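
For reference, below is a minimal sketch of the vanilla AdaGrad-Norm stepsize rule discussed in this line of work (a single scalar stepsize divided by the accumulated gradient norms); the exact variants and constants analyzed in the paper may differ, and the function names are illustrative.

```python
# Sketch of the AdaGrad-Norm update: one adaptive scalar stepsize shared by
# all coordinates, shrinking with the accumulated squared gradient norms.
import numpy as np

def adagrad_norm(grad, x0, eta=1.0, b0=1e-8, steps=100):
    """grad: function returning a (possibly stochastic) gradient at x."""
    x = np.asarray(x0, dtype=float).copy()
    accum = b0 ** 2
    for _ in range(steps):
        g = grad(x)
        accum += float(np.dot(g, g))       # running sum of ||g_t||^2
        x -= (eta / np.sqrt(accum)) * g    # single adaptive scalar stepsize
    return x

# Toy usage on f(x) = ||x||^2 / 2, whose gradient is x.
print(adagrad_norm(lambda x: x, np.ones(3), steps=500))
```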

NeurIPS Conference 2023 Conference Paper

On the Generalization Error of Stochastic Mirror Descent for Quadratically-Bounded Losses: an Improved Analysis

  • Ta Duy Nguyen
  • Alina Ene
  • Huy Nguyen

In this work, we revisit the generalization error of stochastic mirror descent for quadratically bounded losses studied in Telgarsky (2022). Quadratically bounded losses are a broad class of loss functions, capturing both Lipschitz and smooth functions, for both regression and classification problems. We study the high probability generalization for this class of losses on linear predictors in both realizable and non-realizable cases when the data are sampled IID or from a Markov chain. The prior work relies on an intricate coupling argument between the iterates of the original problem and those projected onto a bounded domain. This approach enables blackbox application of concentration inequalities, but also leads to suboptimal guarantees due in part to the use of a union bound across all iterations. In this work, we depart significantly from the prior work of Telgarsky (2022), and introduce a novel approach for establishing high probability generalization guarantees. In contrast to the prior work, our work directly analyzes the moment generating function of a novel supermartingale sequence and leverages the structure of stochastic mirror descent. As a result, we obtain improved bounds in all aforementioned settings. Specifically, in the realizable case and the non-realizable case with light-tailed sub-Gaussian data, we improve the bounds by a $\log T$ factor, matching the correct rates of $1/T$ and $1/\sqrt{T}$, respectively. In the more challenging case of heavy-tailed polynomial data, we improve the existing bound by a $\mathrm{poly}(T)$ factor.
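
For reference, the generic stochastic mirror descent update with mirror map $\psi$, stepsize $\eta_t$, and stochastic (sub)gradient $g_t$ is, in one standard formulation (the paper's precise setup may differ):

```latex
x_{t+1} \;=\; \arg\min_{x \in \mathcal{X}} \Big\{ \eta_t \langle g_t, x\rangle + D_{\psi}(x, x_t) \Big\},
\qquad
D_{\psi}(x, y) \;=\; \psi(x) - \psi(y) - \langle \nabla\psi(y),\, x - y\rangle .
```

With $\psi(x) = \tfrac{1}{2}\|x\|_2^2$ and $\mathcal{X} = \mathbb{R}^d$ this recovers plain SGD.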

NeurIPS Conference 2023 Conference Paper

Online Ad Allocation with Predictions

  • Fabian Spaeh
  • Alina Ene

Display Ads and the generalized assignment problem are two well-studied online packing problems with important applications in ad allocation and other areas. In both problems, ad impressions arrive online and have to be allocated immediately to budget-constrained advertisers. Worst-case algorithms that achieve the ideal competitive ratio are known for both problems, but might act overly conservative given the predictable and usually tame nature of real-world input. Given this discrepancy, we develop an algorithm for both problems that incorporate machine-learned predictions and can thus improve the performance beyond the worst-case. Our algorithm is based on the work of Feldman et al. (2009) and similar in nature to Mahdian et al. (2007) who were the first to develop a learning-augmented algorithm for the related, but more structured Ad Words problem. We use a novel analysis to show that our algorithm is able to capitalize on a good prediction, while being robust against poor predictions. We experimentally evaluate our algorithm on synthetic and real-world data on a wide range of predictions. Our algorithm is consistently outperforming the worst-case algorithm without predictions.

ICML Conference 2022 Conference Paper

Adaptive Accelerated (Extra-)Gradient Methods with Variance Reduction

  • Zijian Liu 0003
  • Ta Duy Nguyen
  • Alina Ene
  • Huy L. Nguyen 0001

In this paper, we study the finite-sum convex optimization problem focusing on the general convex case. Recently, the study of variance reduced (VR) methods and their accelerated variants has made exciting progress. However, the step size used in the existing VR algorithms typically depends on the smoothness parameter, which is often unknown and requires tuning in practice. To address this problem, we propose two novel adaptive VR algorithms: Adaptive Variance Reduced Accelerated Extra-Gradient (AdaVRAE) and Adaptive Variance Reduced Accelerated Gradient (AdaVRAG). Our algorithms do not require knowledge of the smoothness parameter. AdaVRAE uses $\mathcal{O}\left(n\log\log n+\sqrt{\frac{n\beta}{\epsilon}}\right)$ and AdaVRAG uses $\mathcal{O}\left(n\log\log n+\sqrt{\frac{n\beta\log\beta}{\epsilon}}\right)$ gradient evaluations to attain an $\mathcal{O}(\epsilon)$-suboptimal solution, where $n$ is the number of functions in the finite sum and $\beta$ is the smoothness parameter. This result matches the best-known convergence rate of non-adaptive VR methods and it improves upon the convergence of the state of the art adaptive VR method, AdaSVRG. We demonstrate the superior performance of our algorithms compared with previous methods in experiments on real-world datasets.
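
For context, the sketch below shows the classical SVRG-style variance-reduced gradient estimator that underlies most VR methods; the paper's AdaVRAE and AdaVRAG additionally use adaptive stepsizes and acceleration, so this is background rather than their algorithm, and the names are illustrative.

```python
# Classical SVRG-style variance-reduced gradient estimator: an unbiased
# gradient estimate whose variance shrinks as the iterate approaches the
# snapshot point.
import numpy as np

def svrg_gradient(grads, x, snapshot, full_grad_at_snapshot, i):
    """grads: list of per-example gradient functions; i: sampled index."""
    return grads[i](x) - grads[i](snapshot) + full_grad_at_snapshot

# Toy usage: finite sum of two quadratics f_i(x) = 0.5 * (x - c_i)^2.
centers = np.array([1.0, 3.0])
grads = [lambda x, c=c: x - c for c in centers]
snapshot = np.array(0.0)
full_grad = np.mean([g(snapshot) for g in grads])   # gradient of the average
print(svrg_gradient(grads, np.array(2.0), snapshot, full_grad, i=0))  # 0.0
```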

AAAI Conference 2022 Conference Paper

Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence

  • Alina Ene
  • Huy Lê Nguyễn

We develop new adaptive algorithms for variational inequalities with monotone operators, which capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Our algorithms automatically adapt to unknown problem parameters such as the smoothness and the norm of the operator, and the variance of the stochastic evaluation oracle. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings. The convergence guarantees of our algorithms improve over existing adaptive methods and match the optimal non-adaptive algorithms. Additionally, prior works require that the optimization domain is bounded. In this work, we remove this restriction and give algorithms for unbounded domains that are adaptive and universal. Our general proof techniques can be used for many variants of the algorithm using one or two operator evaluations per iteration. The classical methods based on the ExtraGradient/MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings.
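
For contrast with the two-evaluation baseline mentioned above, here is a sketch of the classical (Euclidean) ExtraGradient step; it is not the paper's adaptive or universal method, and the names are illustrative.

```python
# Classical ExtraGradient step for a monotone operator F: two operator
# evaluations per iteration (extrapolation, then update).
import numpy as np

def extragradient_step(F, z, eta, project=lambda z: z):
    """F: monotone operator; z: current point; project: projection onto the domain."""
    z_half = project(z - eta * F(z))        # extrapolation step (first evaluation)
    return project(z - eta * F(z_half))     # update step (second evaluation)

# Toy usage: the saddle problem min_x max_y x*y has operator F(x, y) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])
z = np.array([1.0, 1.0])
for _ in range(200):
    z = extragradient_step(F, z, eta=0.3)
print(z)   # converges towards the saddle point (0, 0)
```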

ICML Conference 2022 Conference Paper

Streaming Algorithm for Monotone k-Submodular Maximization with Cardinality Constraints

  • Alina Ene
  • Huy L. Nguyen 0001

Maximizing a monotone k-submodular function subject to cardinality constraints is a general model for several applications ranging from influence maximization with multiple products to sensor placement with multiple sensor types and online ad allocation. Due to the large problem scale in many applications and the online nature of ad allocation, a need arises for algorithms that process elements in a streaming fashion and possibly make online decisions. In this work, we develop a new streaming algorithm for maximizing a monotone k-submodular function subject to a per-coordinate cardinality constraint, attaining an approximation guarantee close to the state of the art guarantee in the offline setting. Though not typical for streaming algorithms, our streaming algorithm also readily applies to the online setting with free disposal. Our algorithm is combinatorial and enjoys a fast running time and a small number of function evaluations. Furthermore, its guarantee improves as the cardinality constraints get larger, which makes it especially suited for large-scale applications. For the special case of maximizing a submodular function with large budgets, our combinatorial algorithm matches the guarantee of the state-of-the-art continuous algorithm, which requires significantly more time and function evaluations.

AAAI Conference 2021 Conference Paper

Adaptive Gradient Methods for Constrained Convex Optimization and Variational Inequalities

  • Alina Ene
  • Huy L. Nguyen
  • Adrian Vladu

We provide new adaptive first-order methods for constrained convex optimization. Our main algorithms ADAACSA and ADAAGD+ are accelerated methods, which are universal in the sense that they achieve nearly-optimal convergence rates for both smooth and non-smooth functions, even when they only have access to stochastic gradients. In addition, they do not require any prior knowledge on how the objective function is parametrized, since they automatically adjust their per-coordinate learning rate. These can be seen as truly accelerated ADAGRAD methods for constrained optimization. We complement them with a simpler algorithm ADAGRAD+ which enjoys the same features, and achieves the standard non-accelerated convergence rate. We also present a set of new results involving adaptive methods for unconstrained optimization and monotone operators.

AAAI Conference 2021 Conference Paper

Projection-Free Bandit Optimization with Privacy Guarantees

  • Alina Ene
  • Huy L. Nguyen
  • Adrian Vladu

We design differentially private algorithms for the bandit convex optimization problem in the projection-free setting. This setting is important whenever the decision set has a complex geometry, and access to it is done efficiently only through a linear optimization oracle, hence Euclidean projections are unavailable (e.g., the matroid polytope, the submodular base polytope). This is the first differentially-private algorithm for projection-free bandit optimization, and in fact our bound matches the best known non-private projection-free algorithm and the best known private algorithm, even for the weaker setting when projections are available.
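
To illustrate what "projection-free" means here, below is a sketch of the generic Frank-Wolfe step, which touches the decision set only through a linear optimization oracle; it is not the paper's differentially private bandit algorithm, and the names are illustrative.

```python
# Generic Frank-Wolfe (projection-free) step: access the feasible set K only
# through a linear optimization oracle (LMO), never through a projection.
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=100):
    """grad: gradient of the convex objective; lmo: returns argmin_{s in K} <c, s>."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(steps):
        s = lmo(grad(x))                 # linear optimization oracle call
        gamma = 2.0 / (t + 2.0)          # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination stays in K
    return x

# Toy usage: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.1, 0.7, 0.2])
grad = lambda x: 2 * (x - c)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]   # a vertex of the simplex
print(frank_wolfe(grad, lmo, np.ones(3) / 3))  # approaches c
```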

ICML Conference 2020 Conference Paper

Parallel Algorithm for Non-Monotone DR-Submodular Maximization

  • Alina Ene
  • Huy L. Nguyen 0001

In this work, we give a new parallel algorithm for the problem of maximizing a non-monotone diminishing returns submodular function subject to a cardinality constraint. For any desired accuracy $\epsilon$, our algorithm achieves a $1/e - \epsilon$ approximation using $O(\log{n} \log(1/\epsilon) / \epsilon^3)$ parallel rounds of function evaluations. The approximation guarantee nearly matches the best approximation guarantee known for the problem in the sequential setting and the number of parallel rounds is nearly-optimal for any constant $\epsilon$. Previous algorithms achieve worse approximation guarantees using $\Omega(\log^2{n})$ parallel rounds. Our experimental evaluation suggests that our algorithm obtains solutions whose objective value nearly matches the value obtained by the state of the art sequential algorithms, and it outperforms previous parallel algorithms in number of parallel rounds, iterations, and solution quality.

ICML Conference 2019 Conference Paper

Improved Convergence for $\ell_1$ and $\ell_{\infty}$ Regression via Iteratively Reweighted Least Squares

  • Alina Ene
  • Adrian Vladu

The iteratively reweighted least squares method (IRLS) is a popular technique used in practice for solving regression problems. Various versions of this method have been proposed, but their theoretical analyses failed to capture the good practical performance. In this paper we propose a simple and natural version of IRLS for solving $\ell_\infty$ and $\ell_1$ regression, which provably converges to a $(1+\epsilon)$-approximate solution in $O(m^{1/3}\log(1/\epsilon)/\epsilon^{2/3} + \log m/\epsilon^2)$ iterations, where $m$ is the number of rows of the input matrix. Interestingly, this running time is independent of the conditioning of the input, and the dominant term of the running time depends sublinearly on $\epsilon^{-1}$, which is atypical for the optimization of non-smooth functions. This improves upon the more complex algorithms of Chin et al. (ITCS ’12) and Christiano et al. (STOC ’11) by a factor of at least $1/\epsilon^2$, and yields a truly efficient natural algorithm for the slime mold dynamics (Straszak-Vishnoi, SODA ’16, ITCS ’16, ITCS ’17).
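
As a point of reference, the sketch below shows the textbook IRLS loop for $\ell_1$ regression (reweighting by inverse residuals); the provably convergent variant in the paper uses a different, carefully controlled reweighting scheme, so this is only the generic template, with illustrative names.

```python
# Textbook IRLS loop for l1 regression: repeatedly solve a weighted least
# squares problem with weights 1/max(|residual|, delta), which downweights
# rows with large residuals (e.g. outliers).
import numpy as np

def irls_l1(A, b, iters=50, delta=1e-8):
    """Approximately minimize ||Ax - b||_1 by repeated weighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # start from the l2 solution
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), delta)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Toy usage: a small overdetermined system with one outlier row.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 30.0])                 # last entry is an outlier
print(irls_l1(A, b))   # close to [1, 2], largely ignoring the outlier
```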

STOC Conference 2019 Conference Paper

Submodular maximization with matroid and packing constraints in parallel

  • Alina Ene
  • Huy L. Nguyen 0001
  • Adrian Vladu

We consider the problem of maximizing the multilinear extension of a submodular function subject to a single matroid constraint or multiple packing constraints with a small number of adaptive rounds of evaluation queries. We obtain the first algorithms with low adaptivity for submodular maximization with a matroid constraint. Our algorithms achieve a $1-1/e-\epsilon$ approximation for monotone functions and a $1/e-\epsilon$ approximation for non-monotone functions, which nearly matches the best guarantees known in the fully adaptive setting. The number of rounds of adaptivity is $O(\log^2 n/\epsilon^3)$, which is an exponential speedup over the existing algorithms. We obtain the first parallel algorithm for non-monotone submodular maximization subject to packing constraints. Our algorithm achieves a $1/e-\epsilon$ approximation using $O(\log(n/\epsilon)\log(1/\epsilon)\log(n+m)/\epsilon^2)$ parallel rounds, which is again an exponential speedup in parallel time over the existing algorithms. For monotone functions, we obtain a $1-1/e-\epsilon$ approximation in $O(\log(n/\epsilon)\log m/\epsilon^2)$ parallel rounds. The number of parallel rounds of our algorithm matches that of the state of the art algorithm for solving packing LPs with a linear objective (Mahoney et al., 2016). Our results apply more generally to the problem of maximizing a diminishing returns submodular (DR-submodular) function.

NeurIPS Conference 2017 Conference Paper

Decomposable Submodular Function Minimization: Discrete and Continuous

  • Alina Ene
  • Huy Nguyen
  • László Végh

This paper investigates connections between discrete and continuous approaches for decomposable submodular function minimization. We provide improved running time estimates for the state-of-the-art continuous algorithms for the problem using combinatorial arguments. We also provide a systematic experimental comparison of the two types of methods, based on a clear distinction between level-0 and level-1 algorithms.

FOCS Conference 2016 Conference Paper

A New Framework for Distributed Submodular Maximization

  • Rafael da Ponte Barbosa
  • Alina Ene
  • Huy L. Nguyen 0001
  • Justin Ward

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. A lot of recent effort has been devoted to developing distributed algorithms for these problems. However, these results suffer from a high number of rounds, suboptimal approximation ratios, or both. We develop a framework for bringing existing algorithms in the sequential setting to the distributed setting, achieving near-optimal approximation ratios for many settings in only a constant number of MapReduce rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint.

FOCS Conference 2016 Conference Paper

Constrained Submodular Maximization: Beyond 1/e

  • Alina Ene
  • Huy L. Nguyen 0001

In this work, we present a new algorithm for maximizing a non-monotone submodular function subject to a general constraint. Our algorithm finds an approximate fractional solution for maximizing the multilinear extension of the function over a down-closed polytope. The approximation guarantee is 0.372, and it is the first improvement over the $1/e$ approximation achieved by the unified Continuous Greedy algorithm [Feldman et al., FOCS 2011].

FOCS Conference 2016 Conference Paper

On Approximating Maximum Independent Set of Rectangles

  • Julia Chuzhoy
  • Alina Ene

We study the Maximum Independent Set of Rectangles (MISR) problem: given a set of $n$ axis-parallel rectangles, find a largest-cardinality subset of the rectangles such that no two of them overlap. MISR is a basic geometric optimization problem with many applications that has been studied extensively. Until recently, the best approximation algorithm for it achieved an $O(\log\log n)$-approximation factor. In a recent breakthrough, Adamaszek and Wiese provided a quasi-polynomial time approximation scheme: a $(1-\epsilon)$-approximation algorithm with running time $n^{O(\mathrm{poly}(\log n)/\epsilon)}$. Despite this result, obtaining a PTAS or even a polynomial-time constant-factor approximation remains a challenging open problem. In this paper we make progress towards this goal by providing an algorithm for MISR that achieves a $(1-\epsilon)$-approximation in time $n^{O(\mathrm{poly}(\log\log n/\epsilon))}$. We introduce several new technical ideas that we hope will lead to further progress on this and related problems.

FOCS Conference 2015 Conference Paper

Online Buy-at-Bulk Network Design

  • Alina Ene
  • Deeparnab Chakrabarty
  • Ravishankar Krishnaswamy
  • Debmalya Panigrahi

We present the first non-trivial online algorithms for the non-uniform, multicommodity buy-at-bulk (MC-BB) network design problem. Our competitive ratios qualitatively match the best known approximation factors for the corresponding offline problems. In particular, we show: 1. A polynomial time online algorithm with a poly-logarithmic competitive ratio for the MC-BB problem in undirected edge-weighted graphs. 2. A quasi-polynomial time online algorithm with a poly-logarithmic competitive ratio for the MC-BB problem in undirected node-weighted graphs. 3. For any fixed $\epsilon > 0$, a polynomial time online algorithm with a competitive ratio of $\tilde{O}(k^{1/2+\epsilon}\,\mathrm{polylog}(n))$ (where $k$ is the number of demands) for MC-BB in directed graphs. 4. Algorithms with matching competitive ratios for the prize-collecting variants of all the above problems. Prior to our work, a logarithmic competitive ratio was known for undirected, edge-weighted graphs only for the special case of uniform costs (Awerbuch and Azar, FOCS 1997), and a polylogarithmic competitive ratio was known for the edge-weighted single-sink problem (Meyerson, SPAA 2004). To the best of our knowledge, no previous online algorithm was known, even for uniform costs, in the node-weighted and directed settings. Our main engine for the results above is an online reduction theorem of MC-BB problems to their single-sink (SS-BB) counterparts. We use the concept of junction-tree solutions (Chekuri et al., FOCS 2006) that play an important role in solving the offline versions of the problem via a greedy subroutine, an inherently offline procedure. Our main technical contribution is in designing an online algorithm using only the existence of good junction-trees to reduce an MC-BB instance to multiple SS-BB sub-instances. Along the way, we also give the first non-trivial online node-weighted/directed single-sink buy-at-bulk algorithms. In addition to the new results, our generic reduction also yields new proofs of recent results for the online node-weighted Steiner forest and online group Steiner forest problems.

ICML Conference 2015 Conference Paper

Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

  • Alina Ene
  • Huy L. Nguyen 0001

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of “simple” functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.

ICML Conference 2015 Conference Paper

The Power of Randomization: Distributed Submodular Maximization on Massive Datasets

  • Rafael da Ponte Barbosa
  • Alina Ene
  • Huy L. Nguyen 0001
  • Justin Ward

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. Unfortunately, the resulting submodular optimization problems are often too large to be solved on a single machine. We consider a distributed, greedy algorithm that combines previous approaches with randomization. The result is an algorithm that is embarrassingly parallel and achieves provable, constant factor, worst-case approximation guarantees. In our experiments, we demonstrate its efficiency in large problems with different kinds of constraints with objective values always close to what is achievable in the centralized setting.
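
A minimal single-machine simulation of the two-round template the abstract alludes to (randomly partition the data, run greedy per machine, then run greedy over the pooled per-machine solutions) is sketched below; the paper's actual algorithm and analysis may differ in details such as which candidate solution is returned, and the names are illustrative.

```python
# Single-machine simulation of a two-round randomized distributed greedy
# template: random partition -> greedy per part -> greedy on the pooled outputs.
import random

def greedy(f, ground, k):
    """Standard greedy for cardinality-constrained monotone submodular maximization."""
    S = []
    for _ in range(k):
        e = max((x for x in ground if x not in S), key=lambda x: f(S + [x]) - f(S))
        S.append(e)
    return S

def distributed_greedy(f, ground, k, machines=4, seed=0):
    random.seed(seed)
    parts = [[] for _ in range(machines)]
    for x in ground:
        parts[random.randrange(machines)].append(x)     # round 1: random partition
    local_solutions = [greedy(f, p, min(k, len(p))) for p in parts if p]
    pooled = [x for sol in local_solutions for x in sol]
    return greedy(f, pooled, min(k, len(pooled)))       # round 2: greedy on the union

# Toy usage with a coverage function (monotone submodular).
sets = {i: frozenset(range(i, i + 3)) for i in range(20)}
f = lambda S: len(set().union(*[sets[i] for i in S]))
print(distributed_greedy(f, list(sets), k=4))
```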

STOC Conference 2014 Conference Paper

Improved approximation algorithms for degree-bounded network design problems with node connectivity requirements

  • Alina Ene
  • Ali Vakilian

We consider degree-bounded network design problems with element and vertex connectivity requirements. In the degree-bounded Survivable Network Design (SNDP) problem, the input is an undirected graph $G=(V,E)$ with weights $w(e)$ on the edges and degree bounds $b(v)$ on the vertices, and connectivity requirements $r(uv)$ for each pair $uv$ of vertices. The goal is to select a minimum-weight subgraph $H$ of $G$ that meets the connectivity requirements and satisfies the degree bounds on the vertices: for each pair $uv$ of vertices, $H$ has $r(uv)$ disjoint paths between $u$ and $v$; additionally, each vertex $v$ is incident to at most $b(v)$ edges in $H$. We give the first $(O(1), O(1)\cdot b(v))$ bicriteria approximation algorithms for the degree-bounded SNDP problem with element connectivity requirements and for several degree-bounded SNDP problems with vertex connectivity requirements. Our algorithms construct a subgraph $H$ whose weight is at most $O(1)$ times the optimum such that each vertex $v$ is incident to at most $O(1)\cdot b(v)$ edges in $H$. We can also extend our approach to network design problems in directed graphs with out-degree constraints to obtain $(O(1), O(1)\cdot b^{+}(v))$ bicriteria approximations.

STOC Conference 2012 Conference Paper

Approximation algorithms and hardness of integral concurrent flow

  • Parinya Chalermsook
  • Julia Chuzhoy
  • Alina Ene
  • Shi Li 0001

We study an integral counterpart of the classical Maximum Concurrent Flow problem, that we call Integral Concurrent Flow (ICF). In the basic version of this problem (basic-ICF), we are given an undirected $n$-vertex graph $G$ with edge capacities $c(e)$, a subset $T$ of vertices called terminals, and a demand $D(t,t')$ for every pair $(t,t')$ of terminals. The goal is to find a maximum value $\lambda$, and a collection $P$ of paths, such that every pair $(t,t')$ of terminals is connected by $\lfloor \lambda \cdot D(t,t')\rfloor$ paths in $P$, and the number of paths containing any edge $e$ is at most $c(e)$. We show an algorithm that achieves a $\mathrm{poly}\log n$-approximation for basic-ICF, while violating the edge capacities by only a constant factor. We complement this result by proving that no efficient algorithm can achieve a factor $\alpha$-approximation with congestion $c$ for any values $\alpha, c$ satisfying $\alpha \cdot c = O(\log\log n/\log\log\log n)$, unless $NP \subseteq ZPTIME(n^{\mathrm{poly}\log n})$. We then turn to study the more general group version of the problem (group-ICF), in which we are given a collection $\{(S_1,T_1),\ldots,(S_k,T_k)\}$ of pairs of vertex subsets, and for each $1 \le i \le k$, a demand $D_i$ is specified. The goal is to find a maximum value $\lambda$ and a collection $P$ of paths, such that for each $i$, at least $\lfloor \lambda \cdot D_i\rfloor$ paths connect the vertices of $S_i$ to the vertices of $T_i$, while respecting the edge capacities. We show that for any $1 \le c \le O(\log\log n)$, no efficient algorithm can achieve a factor $O(n^{1/2^{2c+3}})$-approximation with congestion $c$ for the problem, unless $NP \subseteq DTIME(n^{O(\log\log n)})$. On the other hand, we show an efficient randomized algorithm that finds a $\mathrm{poly}\log n$-approximate solution with a constant congestion, if we are guaranteed that the optimal solution contains at least $D \ge k\,\mathrm{poly}\log n$ paths connecting every pair $(S_i,T_i)$.

FOCS Conference 2011 Conference Paper

Approximation Algorithms for Submodular Multiway Partition

  • Chandra Chekuri
  • Alina Ene

We study algorithms for the Submodular Multiway Partition problem (Sub-MP). An instance of Sub-MP consists of a finite ground set $V$, a subset $S = \{s_1, s_2, \ldots, s_k\} \subseteq V$ of $k$ elements called terminals, and a non-negative submodular set function $f: 2^V \to \mathbb{R}_+$ on $V$ provided as a value oracle. The goal is to partition $V$ into $k$ sets $A_1, \ldots, A_k$ that minimize $\sum_{i=1}^{k} f(A_i)$ such that $s_i \in A_i$ for $1 \le i \le k$. Sub-MP generalizes some well-known problems such as the Multiway Cut problem in graphs and hypergraphs, and the Node-Weighted Multiway Cut problem in graphs. Sub-MP for arbitrary submodular functions (instead of just symmetric functions) was considered by Zhao, Nagamochi and Ibaraki [29]. Previous algorithms were based on greedy splitting and divide-and-conquer strategies. In recent work [5] we proposed a convex-programming relaxation for Sub-MP based on the Lovász extension of a submodular function and showed its applicability for some special cases. In this paper we obtain the following results for arbitrary submodular functions via this relaxation. (1) A 2-approximation for Sub-MP. This improves the $(k-1)$-approximation from [29]. (2) A $(1.5 - 1/k)$-approximation for Sub-MP when $f$ is symmetric. This improves the $2(1-1/k)$-approximation from [23], [29].
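
For reference, the Lovász extension mentioned above can be written, for $x \in [0,1]^V$, in one standard form (background only, not the paper's specific relaxation):

```latex
\hat{f}(x) \;=\; \mathbb{E}_{\theta \sim \mathrm{Unif}[0,1]}\big[\, f(\{\, i \in V : x_i \ge \theta \,\})\,\big],
```

and it is convex precisely when $f$ is submodular, which is what makes convex-programming relaxations of this type possible.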