Arrow Research search

Author name cluster

Adrian Vetta

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

17 papers
2 author rows

Possible papers

AAAI Conference 2025 Conference Paper

Eliminating Majority Illusion Is Easy

  • Jack Dippel
  • Max Dupré la Tour
  • April Niu
  • Sanjukta Roy
  • Adrian Vetta

Majority illusion is a phenomenon in social networks in which the majority opinion of the network differs from the majority within an individual's own social circle, leading individuals to misperceive the majority in a large network. We present polynomial-time algorithms that completely eliminate majority illusion by altering as few connections in the network as possible. Eliminating majority illusion ensures that each neighbourhood in the network contains at least a 1/2-fraction of nodes supporting the majority winner. This result is surprising, as even partially eliminating majority illusion is NP-hard. We generalize the majority illusion problem to an arbitrary fraction p and show that the problem of ensuring all neighbourhoods in the network contain at least a p-fraction of nodes consistent with a given preference is NP-hard for nearly all values of p.
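
To make the neighbourhood condition concrete, here is a minimal sketch (plain Python; the adjacency-list representation, function names, and the toy network are illustrative assumptions, not code from the paper) that flags nodes experiencing p-majority illusion:

```python
# Sketch: flag nodes whose neighbourhood falls below a p-fraction of the
# network-wide majority winner, per the definition in the abstract.
# The adjacency-list representation and all names are illustrative.

def global_majority(prefs):
    """Preference held by a strict majority of nodes (None on a tie)."""
    yes = sum(prefs.values())
    n = len(prefs)
    if 2 * yes > n:
        return 1
    if 2 * (n - yes) > n:
        return 0
    return None

def illusion_nodes(adj, prefs, p=0.5):
    """Nodes whose neighbourhood holds less than a p-fraction for the winner."""
    winner = global_majority(prefs)
    return [
        v for v, nbrs in adj.items()
        if nbrs and sum(prefs[u] == winner for u in nbrs) / len(nbrs) < p
    ]

# Toy network: node 'd' sees a local majority for 0 although 1 wins globally.
adj = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'],
       'd': ['c', 'e', 'f'], 'e': ['d', 'f'], 'f': ['d', 'e']}
prefs = {'a': 1, 'b': 1, 'c': 1, 'd': 1, 'e': 0, 'f': 0}
print(illusion_nodes(adj, prefs))  # ['d']
```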

STOC Conference 2025 Conference Paper

Six Candidates Suffice to Win a Voter Majority

  • Moses Charikar
  • Alexandra Lassota
  • Prasanna Ramakrishnan
  • Adrian Vetta
  • Kangning Wang 0001

A cornerstone of social choice theory is Condorcet’s paradox, which says that in an election where n voters rank m candidates it is possible that, no matter which candidate is declared the winner, a majority of voters would have preferred an alternative candidate. Instead, can we always choose a small committee of winning candidates that is preferred to any alternative candidate by a majority of voters? Elkind, Lang, and Saffidine raised this question and called such a committee a Condorcet winning set. They showed that winning sets of size 2 may not exist, but sets of size logarithmic in the number of candidates always do. In this work, we show that Condorcet winning sets of size 6 always exist, regardless of the number of candidates or the number of voters. More generally, we show that if α/(1 − ln α) ≥ 2/(k + 1), then there always exists a committee of size k such that less than an α fraction of the voters prefer an alternate candidate. These are the first nontrivial positive results that apply for all k ≥ 2. Our proof uses the probabilistic method and the minimax theorem, inspired by recent work on approximately stable committee selection. We construct a distribution over committees that performs sufficiently well (when compared against any candidate on any small subset of the voters) so that this distribution must contain a committee with the desired property in its support.
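
As a concrete reading of the committee condition, the following brute-force sketch (illustrative names and inputs; a definition-level checker, not the paper's probabilistic construction) verifies whether a committee is an α-winning set and numerically checks the stated inequality at k = 6, α = 1/2:

```python
import math
from itertools import combinations

# Sketch: a voter "prefers c to the committee" only if c beats every committee
# member in that voter's ranking (best candidate first). A committee is an
# alpha-winning set if every outside candidate attracts fewer than alpha * n
# such voters. All names and the toy profile are illustrative.

def is_alpha_winning_set(committee, voters, candidates, alpha=0.5):
    for c in candidates:
        if c in committee:
            continue
        defectors = sum(
            1 for rank in voters
            if all(rank.index(c) < rank.index(w) for w in committee)
        )
        if defectors >= alpha * len(voters):
            return False
    return True

def smallest_winning_set(voters, candidates, alpha=0.5):
    """Exhaustive search for a smallest alpha-winning committee (tiny inputs)."""
    for k in range(1, len(candidates) + 1):
        for committee in combinations(candidates, k):
            if is_alpha_winning_set(committee, voters, candidates, alpha):
                return committee
    return None

# Condorcet's paradox on three candidates: no size-1 set works, size 2 does.
voters = [['a', 'b', 'c'], ['b', 'c', 'a'], ['c', 'a', 'b']]
print(smallest_winning_set(voters, ['a', 'b', 'c']))  # ('a', 'b')

# The sufficient condition alpha/(1 - ln alpha) >= 2/(k + 1) at k = 6, alpha = 1/2:
alpha, k = 0.5, 6
print(alpha / (1 - math.log(alpha)) >= 2 / (k + 1))  # True: 0.2953 >= 0.2857
```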

AAMAS Conference 2024 Conference Paper

Gerrymandering Planar Graphs

  • Jack Dippel
  • Max Dupré la Tour
  • April Niu
  • Sanjukta Roy
  • Adrian Vetta

We study the computational complexity of the map redistricting problem (gerrymandering). Mathematically, the electoral district designer (gerrymanderer) attempts to partition a weighted graph into 𝑘 connected components (districts) such that its candidate (party) wins as many districts as possible. Prior work has principally concerned the special cases where the graph is a path or a tree. Our focus is the realistic case where the graph is planar. We prove that the gerrymandering problem is solvable in polynomial time in 𝜆-outerplanar graphs, when the number of candidates and 𝜆 are constants and the vertex weights (voting weights) are polynomially bounded. In contrast, the problem is NP-complete in general planar graphs even with just two candidates. This motivates the study of approximation algorithms for gerrymandering planar graphs. However, when the number of candidates is large, we prove it is hard to distinguish between instances where the gerrymanderer cannot win a single district and instances where the gerrymanderer can win at least one district. This immediately implies that the redistricting problem is inapproximable in polynomial time in planar graphs, unless P=NP. This conclusion appears terminal for the design of good approximation algorithms – but it is not. The inapproximability bound can be circumvented as it only applies when the maximum number of districts the gerrymanderer can win is extremely small, say one. Indeed, for a fixed number of candidates, our main result is that there is a constant factor approximation algorithm for redistricting unweighted planar graphs, provided the optimal value is a large enough constant.
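
The objective being optimized is easy to state in code. Here is a small sketch (hypothetical names; it scores a given districting rather than solving the NP-hard search problem):

```python
from collections import deque

# Sketch: score a proposed districting for the gerrymanderer's party. Districts
# are vertex sets; votes[v] maps candidates to vote weights at vertex v; each
# district must induce a connected subgraph. All names are illustrative.

def is_connected(district, adj):
    district = set(district)
    start = next(iter(district))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u in district and u not in seen:
                seen.add(u)
                queue.append(u)
    return seen == district

def districts_won(partition, adj, votes, party):
    """Districts the party wins under plurality voting (ties count as losses)."""
    assert all(is_connected(d, adj) for d in partition), "districts must be connected"
    wins = 0
    for district in partition:
        tally = {}
        for v in district:
            for cand, w in votes[v].items():
                tally[cand] = tally.get(cand, 0) + w
        top = max(tally.values())
        if tally.get(party, 0) == top and list(tally.values()).count(top) == 1:
            wins += 1
    return wins

# 2x2 grid graph with two candidates: the column split wins 'R' one district.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
votes = {0: {'R': 3, 'B': 1}, 1: {'R': 1, 'B': 3},
         2: {'R': 3, 'B': 1}, 3: {'R': 1, 'B': 3}}
print(districts_won([[0, 2], [1, 3]], adj, votes, 'R'))  # 1
```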

FOCS Conference 2015 Conference Paper

Polylogarithmic Approximations for the Capacitated Single-Sink Confluent Flow Problem

  • F. Bruce Shepherd
  • Adrian Vetta
  • Gordon T. Wilfong

A single-sink confluent flow is a routing of multiple demands to a sink r such that any flow exiting a node v must use a single arc. Hence, a confluent flow routes on a tree within the network. In uncapacitated (or uniform-capacity) networks, there is an O(1)-approximation algorithm for demand maximization and a logarithmic approximation algorithm for congestion minimization [6]. We study the case of capacitated networks, where each node v has its own capacity μ(v). Indeed, it was recently shown that demand maximization is inapproximable to within polynomial factors in capacitated networks [20]. We circumvent this lower bound in two ways. First, we prove that there is a polylogarithmic approximation algorithm for demand maximization in networks that satisfy the ubiquitous no-bottleneck assumption (NBA). Second, we show a bicriteria result for capacitated networks without the NBA: there is a polylog factor approximation guarantee for demand maximization provided we allow congestion 2. We model the capacitated confluent flows problem using a multilayer linear programming formulation. At the heart of our approach for demand maximization is a rounding procedure for flows on multilayer networks which can be viewed as a proposal algorithm for an extension of stable matchings. In addition, the demand maximization algorithms require, as a subroutine, an algorithm for approximate congestion minimization in a special class of capacitated networks that may be of independent interest. Specifically, we present a polylogarithmic approximation algorithm for congestion minimization in monotonic networks - those networks with the property that μ(u) ≤ μ(v) for each arc (u, v).
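
To pin down the object being routed, here is a small verifier (the dictionary representation and names are assumptions for illustration) for the confluent property and the resulting node congestion:

```python
# Sketch: verify confluence (each non-sink node forwards all of its flow on a
# single arc, so flow paths form a tree into the sink r) and compute node
# congestion against capacities mu(v). next_hop is assumed acyclic; the
# representation and all names are illustrative.

def confluent_congestion(next_hop, demand, mu, sink):
    """Maximum load(v)/mu(v) when each demand follows its unique path to sink."""
    load = {v: 0.0 for v in mu}
    for v, d in demand.items():
        while v != sink:        # confluence: exactly one outgoing arc per node
            load[v] += d
            v = next_hop[v]
        load[sink] += d
    return max(load[v] / mu[v] for v in mu)

# Three unit demands funnel into sink 't'; node 'c' carries two of them.
next_hop = {'a': 'c', 'b': 'c', 'c': 't', 'd': 't'}
demand = {'a': 1.0, 'b': 1.0, 'd': 1.0}
mu = {'a': 1.0, 'b': 1.0, 'c': 1.5, 'd': 1.0, 't': 4.0}
print(confluent_congestion(next_hop, demand, mu, 't'))  # 1.333... at node 'c'
```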

AAAI Conference 2014 Conference Paper

False-Name Bidding and Economic Efficiency in Combinatorial Auctions

  • Colleen Alkalay-Houlihan
  • Adrian Vetta

Combinatorial auctions are multiple-item auctions in which bidders may place bids on any package (subset) of goods. This additional expressibility produces benefits that have led to combinatorial auctions becoming extremely important both in practice and in theory. In the computer science community, auction design has focused primarily on computational practicality and incentive compatibility. The latter concerns mechanisms that are resistant to bidders misrepresenting themselves via a single false identity; however, with modern forms of bid submission, such as electronic bidding, other types of cheating have become feasible. Prominent amongst them is false-name bidding; that is, bidding under pseudonyms. For example, the ubiquitous Vickrey-Clarke-Groves (VCG) mechanism is incentive compatible and produces optimal allocations, but it is not false-name-proof – bidders can increase their utility by submitting bids under multiple identifiers. Thus, there has recently been much interest in the design and analysis of false-name-proof auction mechanisms. These false-name-proof mechanisms, however, have polynomially small efficiency guarantees: they can produce allocations with very low economic efficiency/social welfare. In contrast, we show that, provided the degree to which different goods are complementary is bounded (as is the case in many important, practical auctions), the VCG mechanism gives a constant efficiency guarantee. Constant efficiency guarantees hold even at equilibria where the agents bid in a manner that is not individually rational. Thus, while an individual bidder may personally benefit greatly from making false-name bids, this will have only a small detrimental effect on the objective of the auctioneer: maximizing economic efficiency. So, from the auctioneer’s viewpoint the VCG mechanism remains preferable to false-name-proof mechanisms.
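
For readers unfamiliar with the mechanism under discussion, a brute-force VCG sketch on a tiny combinatorial auction (exponential search with illustrative names, not production auction code) shows both the payments and the false-name vulnerability described above:

```python
from itertools import product

# Sketch: the VCG mechanism on a tiny combinatorial auction, solved by brute
# force over all allocations (exponential; illustration only). bids[b] maps a
# frozenset bundle to bidder b's reported value. All names are illustrative.

def best_outcome(bids):
    """(welfare, {bidder: bundle}) maximizing total reported value."""
    bidders = list(bids)
    options = [list(bids[b]) + [frozenset()] for b in bidders]
    best, best_alloc = 0.0, dict.fromkeys(bidders, frozenset())
    for choice in product(*options):
        claimed, total, feasible = set(), 0.0, True
        for b, bundle in zip(bidders, choice):
            if bundle & claimed:
                feasible = False
                break
            claimed |= bundle
            total += bids[b].get(bundle, 0.0)
        if feasible and total > best:
            best, best_alloc = total, dict(zip(bidders, choice))
    return best, best_alloc

def vcg(bids):
    """Welfare-optimal allocation; each bidder pays the externality it imposes."""
    welfare, alloc = best_outcome(bids)
    payments = {}
    for b in bids:
        without_b, _ = best_outcome({k: v for k, v in bids.items() if k != b})
        others_with_b = welfare - bids[b].get(alloc[b], 0.0)
        payments[b] = without_b - others_with_b
    return alloc, payments

# One pair-bidder against a rival pair-bid of 4: bidding honestly, it pays 4...
print(vcg({1: {frozenset('xy'): 5.0}, 2: {frozenset('xy'): 4.0}}))
# ...but after splitting into two pseudonyms it wins both goods and pays 0.
print(vcg({'1a': {frozenset('x'): 5.0}, '1b': {frozenset('y'): 5.0},
           2: {frozenset('xy'): 4.0}}))
```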

NeurIPS Conference 2014 Conference Paper

Randomized Experimental Design for Causal Graph Discovery

  • Huining Hu
  • Zhentao Li
  • Adrian Vetta

We examine the number of controlled experiments required to discover a causal graph. Hauser and Bühlmann showed that the number of experiments required is logarithmic in the cardinality of the maximum undirected clique in the essential graph. Their lower bounds, however, assume that the experiment designer cannot use randomization in selecting the experiments. We show that significant improvements are possible with the aid of randomization – in an adversarial (worst-case) setting, the designer can then recover the causal graph using at most O(log log n) experiments in expectation. This bound cannot be improved; we show it is tight for some causal graphs. We then show that in a non-adversarial (average-case) setting, even larger improvements are possible: if the causal graph is chosen uniformly at random under an Erdős-Rényi model then the expected number of experiments to discover the causal graph is constant. Finally, we present computer simulations to complement our theoretical results. Our work exploits a structural characterization of essential graphs by Andersson et al. Their characterization is based upon a set of orientation forcing operations. Our results show a distinction between which forcing operations are most important in worst-case and average-case settings.
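
A toy sketch can illustrate the combinatorial core here, under the standard observation that a set of experiments orients all edges inside an undirected clique exactly when every pair of its vertices is separated by some experiment (names and the simulation are illustrative; note that non-adaptive uniformly random subsets need roughly log₂ of the number of pairs, whereas the paper's O(log log n) bound relies on adaptive randomized designs):

```python
import random
from itertools import combinations

# Sketch: experiments are vertex subsets; an edge (u, v) inside an undirected
# clique gets oriented once some experiment contains exactly one of u, v. We
# check that condition and simulate how many uniformly random subsets a
# non-adaptive designer needs. Names and parameters are illustrative.

def separates_all_pairs(experiments, clique):
    return all(
        any((u in e) != (v in e) for e in experiments)
        for u, v in combinations(clique, 2)
    )

def expected_random_experiments(n, trials=500, seed=0):
    rng = random.Random(seed)
    clique = list(range(n))
    total = 0
    for _ in range(trials):
        experiments = []
        while not separates_all_pairs(experiments, clique):
            experiments.append({v for v in clique if rng.random() < 0.5})
        total += len(experiments)
    return total / trials

# Non-adaptive random subsets need about log2(n*(n-1)/2) experiments in
# expectation; adaptivity is what buys the paper's O(log log n) bound.
print(expected_random_experiments(16))  # roughly 8 for n = 16
```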

SODA Conference 2012 Conference Paper

Approximating rooted Steiner networks

  • Joseph Cheriyan
  • Bundit Laekhanukit
  • Guyslain Naves
  • Adrian Vetta

The Directed Steiner Tree (DST) problem is a cornerstone problem in network design. We focus on the generalization of the problem with higher connectivity requirements. The problem with one root and two sinks is APX-hard. The problem with one root and many sinks is as hard to approximate as the directed Steiner forest problem, and the latter is well known to be as hard to approximate as the label cover problem. Utilizing previous techniques (due to others), we strengthen these results and extend them to undirected graphs. Specifically, we give an Ω(k^ε) hardness bound for the rooted k-connectivity problem in undirected graphs; this addresses a recent open question of Khanna. As a consequence, we also obtain the Ω(k^ε) hardness of the undirected subset k-connectivity problem. Additionally, we give a result on the integrality ratio of the natural linear programming relaxation of the directed rooted k-connectivity problem.

STOC Conference 2005 Conference Paper

Approximation algorithms for network design with metric costs

  • Joseph Cheriyan
  • Adrian Vetta

We study undirected networks with edge costs that satisfy the triangle inequality. Let n denote the number of nodes. We present an O(1)-approximation algorithm for a generalization of the metric-cost subset k-node-connectivity problem. Our approximation guarantee is proved via lower bounds that apply to the simple edge-connectivity version of the problem, where the requirements are for edge-disjoint paths rather than for openly node-disjoint paths. A corollary is that, for metric costs and for each k = 1, 2, …, n−1, there exists a k-node connected graph whose cost is within a factor of 24 of the cost of any simple k-edge connected graph. This resolves an open question in the area. Based on our O(1)-approximation algorithm, we present an O(log r_max)-approximation algorithm for the node-connectivity survivable network design problem, where r_max denotes the maximum requirement over all pairs of nodes. Our results contrast with the case of edge costs of zero or one, where Kortsarz et al. [20] recently proved, assuming NP ⊈ quasi-P, a hardness-of-approximation lower bound of 2^(log^(1−ε) n) for the subset k-node-connectivity problem, where ε denotes a small positive number.

FOCS Conference 2005 Conference Paper

Nash Equilibria in Random Games

  • Imre Bárány
  • Santosh S. Vempala
  • Adrian Vetta

We consider Nash equilibria in 2-player random games and analyze a simple Las Vegas algorithm for finding an equilibrium. The algorithm is combinatorial and always finds a Nash equilibrium; on m × n payoff matrices, it runs in time O(m²n log log n + n²m log log m) with high probability. Our main tool is a polytope formulation of equilibria.
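
By way of illustration, here is a definition-level verifier (not the paper's Las Vegas algorithm; names and the matching-pennies example are assumptions) that checks whether a pair of mixed strategies is a Nash equilibrium:

```python
# Sketch: verify a mixed Nash equilibrium of a bimatrix game directly from the
# definition: neither player gains by deviating to any pure strategy.

def expected_payoff(M, x, y):
    return sum(x[i] * M[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))

def is_nash(A, B, x, y, tol=1e-9):
    """A is the row player's payoff matrix, B the column player's."""
    vx = expected_payoff(A, x, y)
    vy = expected_payoff(B, x, y)
    m, n = len(A), len(A[0])
    row_ok = all(sum(A[i][j] * y[j] for j in range(n)) <= vx + tol for i in range(m))
    col_ok = all(sum(B[i][j] * x[i] for i in range(m)) <= vy + tol for j in range(n))
    return row_ok and col_ok

# Matching pennies: the unique equilibrium is uniform play for both players.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(is_nash(A, B, [0.5, 0.5], [0.5, 0.5]))  # True
print(is_nash(A, B, [1.0, 0.0], [0.5, 0.5]))  # False: the column player deviates
```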

FOCS Conference 2005 Conference Paper

Sink Equilibria and Convergence

  • Michel X. Goemans
  • Vahab Mirrokni
  • Adrian Vetta

We introduce the concept of a sink equilibrium. A sink equilibrium is a strongly connected component with no outgoing arcs in the strategy profile graph associated with a game. The strategy profile graph has a vertex set induced by the set of pure strategy profiles; its arc set corresponds to transitions between strategy profiles that occur with nonzero probability. (Here our focus will just be on the special case in which the strategy profile graph is actually a best response graph; that is, its arc set corresponds exactly to best response moves that result from myopic or greedy behaviour). We argue that there is a natural convergence process to sink equilibria in games where agents use pure strategies. This leads to an alternative measure of the social cost of a lack of coordination, the price of sinking, which measures the worst case ratio between the value of a sink equilibrium and the value of the socially optimal solution. We define the value of a sink equilibrium to be the expected social value of the steady state distribution induced by a random walk on that sink.

We illustrate the value of this measure in three ways. Firstly, we show that it may more accurately reflect the inefficiency of uncoordinated solutions in competitive games when the use of pure strategies is the norm. In particular, we give an example (a valid-utility game) in which the game converges to solutions which are a factor n worse than socially optimal. The price of sinking is indeed n, but the price of anarchy is close to 1. Secondly, sink equilibria always exist. Thus, even in games in which pure strategy Nash equilibria (PSNE) do not exist, we can still calculate the price of sinking. Thirdly, we show that bounding the price of sinking can have important implications for the speed of convergence to socially good solutions in games where the agents make best response moves in a random order.

We present two examples to illustrate our ideas. (i) Unsplittable selfish routing (and weighted congestion games): we prove that the price of sinking for the weighted unsplittable flow version of the selfish routing problem (for bounded-degree polynomial latency functions) is at most O(2^(2d) · d^(2d+3)). In comparison, we give instances of these games without any PSNE. Moreover, our proof technique implies fast convergence to socially good (approximate) solutions. This is in contrast to the negative result of Fabrikant, Papadimitriou, and Talwar (2004) showing the existence of exponentially long best-response paths. (ii) Valid-utility games: we show that for valid-utility games the price of sinking is at most n+1; thus the worst case price of sinking in a valid-utility game is between n and n+1. We use our proof to show fast convergence to constant factor approximate solutions in basic-utility games. In addition, we present a hardness result which shows that, in general, there might be states that are exponentially far from any sink equilibrium in valid-utility games. We prove this by showing that the problem of finding a sink equilibrium (or a PSNE) in valid-utility games is PLS-complete.
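
The definitions in this abstract translate directly into a short computation: find the sinks as bottom strongly connected components of a best-response graph, then estimate each sink's value as the steady-state expected social value of a random walk on it. The sketch below is a toy version of that computation (illustrative names; a lazy walk is used so the power iteration converges on periodic sinks):

```python
from collections import defaultdict

# Sketch: sink equilibria of a finite best-response graph are its bottom
# strongly connected components; each sink's value is the expected social
# value under the stationary distribution of a random walk on it. The
# reachability-based SCC test and all names are illustrative (tiny games only).

def reachable(graph, s):
    seen, stack = {s}, [s]
    while stack:
        for u in graph[stack.pop()]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def sink_equilibria(graph):
    """States whose entire reachable set can reach them back form a sink."""
    sinks, assigned = [], set()
    for v in graph:
        if v in assigned:
            continue
        down = reachable(graph, v)
        if all(v in reachable(graph, u) for u in down):
            sinks.append(down)
            assigned |= down
    return sinks

def sink_value(graph, sink, value, iters=200):
    """Steady-state expected value via a lazy random walk (ensures convergence)."""
    dist = {v: 1.0 / len(sink) for v in sink}
    for _ in range(iters):
        nxt = defaultdict(float)
        for v, p in dist.items():
            nxt[v] += p / 2                       # lazy self-loop
            outs = [u for u in graph[v] if u in sink] or [v]
            for u in outs:
                nxt[u] += p / (2 * len(outs))
        dist = dict(nxt)
    return sum(dist[v] * value[v] for v in sink)

# Profiles 0 and 1 lead into a best-response cycle {2, 3}: the unique sink.
graph = {0: [1], 1: [2], 2: [3], 3: [2]}
value = {0: 10.0, 1: 8.0, 2: 4.0, 3: 2.0}
for sink in sink_equilibria(graph):
    print(sorted(sink), sink_value(graph, sink, value))  # [2, 3] 3.0
```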

FOCS Conference 2002 Conference Paper

Nash Equilibria in Competitive Societies, with Applications to Facility Location, Traffic Routing and Auctions

  • Adrian Vetta

We consider the following class of problems. The value of an outcome to a society is measured via a submodular utility function (submodularity has a natural economic interpretation: decreasing marginal utility). Decisions, however, are controlled by non-cooperative agents who seek to maximise their own private utility. We present, under basic assumptions, guarantees on the social performance of Nash equilibria. For submodular utility functions, any Nash equilibrium gives an expected social utility within a factor 2 of optimal, subject to a function-dependent additive term. For non-decreasing, submodular utility functions, any Nash equilibrium gives an expected social utility within a factor 1+δ of optimal, where 0 ≤ δ ≤ 1 is a number based upon the discrete curvature of the function. A condition under which all sets of social and private utility functions induce pure strategy Nash equilibria is presented. The case in which agents themselves make use of approximation algorithms in decision making is discussed and performance guarantees given. Finally we present specific problems that fall into our framework. These include competitive versions of the facility location problem and k-median problem, a maximisation version of the traffic routing problem studied by Roughgarden and Tardos (2000), and multiple-item auctions.
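
The decreasing-marginal-utility property is easy to test directly from its definition. A brute-force sketch over a small ground set (exponential, with illustrative names; coverage functions serve as the textbook example of submodularity) follows:

```python
from itertools import combinations

# Sketch: brute-force test of decreasing marginal utility, i.e. submodularity:
# f(S + v) - f(S) >= f(T + v) - f(T) whenever S is a subset of T and v is not
# in T. Exponential in the ground set; names are illustrative.

def is_submodular(f, ground):
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    return all(
        f(S | {v}) - f(S) >= f(T | {v}) - f(T) - 1e-12
        for S in subsets for T in subsets if S <= T
        for v in ground if v not in T
    )

# Coverage is the classic submodular example: f(S) = size of the union covered.
cover = {'a': {1, 2}, 'b': {2, 3}, 'c': {3, 4}}
f = lambda S: len(set().union(*(cover[v] for v in S)))
print(is_submodular(f, set(cover)))  # True
```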

FOCS Conference 2000 Conference Paper

On Clusterings - Good, Bad and Spectral

  • Ravindran Kannan
  • Santosh S. Vempala
  • Adrian Vetta

We propose a new measure for assessing the quality of a clustering. A simple heuristic is shown to give worst-case guarantees under the new measure. We then present two results regarding the quality of the clustering found by a popular spectral algorithm. One proffers worst-case guarantees whilst the other shows that if there exists a "good" clustering then the spectral algorithm will find one close to it.
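
As a sketch of the kind of spectral heuristic the abstract refers to, the following code (using numpy; the conductance-based scoring and the toy graph are illustrative choices, not the paper's exact measure) orders vertices by the second eigenvector of the normalized adjacency matrix and takes the best sweep cut:

```python
import numpy as np

# Sketch: order vertices by the second eigenvector of the normalized adjacency
# matrix D^(-1/2) A D^(-1/2) and take the sweep cut of minimum conductance.
# The conductance score and the toy graph are illustrative choices.

def conductance(A, S):
    S = np.asarray(sorted(S))
    T = np.setdiff1d(np.arange(A.shape[0]), S)
    cut = A[np.ix_(S, T)].sum()
    vol = min(A[S].sum(), A[T].sum())
    return cut / vol if vol > 0 else 1.0

def spectral_bipartition(A):
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    _, V = np.linalg.eigh(D_inv_sqrt @ A @ D_inv_sqrt)  # ascending eigenvalues
    order = np.argsort(V[:, -2])      # eigenvector of the 2nd-largest eigenvalue
    best_phi, best_S = np.inf, None
    for i in range(1, len(order)):    # sweep cut over prefixes of the ordering
        phi = conductance(A, order[:i])
        if phi < best_phi:
            best_phi, best_S = phi, set(order[:i].tolist())
    return best_S, best_phi

# Two triangles joined by a single edge: the sweep cut recovers the triangles.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
print(spectral_bipartition(A))  # ({0, 1, 2}, 0.1428...) or the complementary side
```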