Arrow Research search

Author name cluster

Éva Tardos

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

48 papers
2 author rows

Possible papers (48)

IJCAI Conference 2025 Conference Paper

Online Resource Sharing: Better Robust Guarantees via Randomized Strategies

  • David X. Lin
  • Daniel Hall
  • Giannis Fikioris
  • Siddhartha Banerjee
  • Éva Tardos

We study the problem of fair online resource allocation via non-monetary mechanisms, where multiple agents repeatedly share a resource without monetary transfers. Previous work has shown that every agent can guarantee 1/2 of their ideal utility (the highest achievable utility given their fair share of resources) robustly, i.e., under arbitrary behavior by the other agents. While this 1/2-robustness guarantee has now been established under very different mechanisms, including pseudo-markets and dynamic max-min allocation, improving on it has appeared difficult. In this work, we obtain the first significant improvement on the robustness of online resource sharing. In more detail, we consider the widely-studied repeated first-price auction with artificial currencies. Our main contribution is to show that a simple randomized bidding strategy can guarantee each agent a 2 - √2 ≈ 0.59 fraction of her ideal utility, irrespective of others' bids. Specifically, our strategy requires each agent with fair share α to use a uniformly distributed bid whenever her value is in the top α-quantile of her value distribution. Our work almost closes the gap to the known 1 - 1/e ≈ 0.63 hardness for robust resource sharing; we also show that any static (i.e., budget-independent) bidding policy cannot guarantee more than a 0.6-fraction of the ideal utility, showing our technique is almost tight.
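
The strategy described in this abstract is simple enough to sketch. The snippet below is a minimal illustration only; the function name, the quantile bookkeeping, and the bid range are assumptions made for the example and are not taken from the paper.

```python
import random

def randomized_bid(value_quantile, alpha, max_bid):
    """Hypothetical sketch of the randomized strategy described in the abstract.

    value_quantile: where the agent's current value sits in her own value
                    distribution (0 = lowest possible value, 1 = highest).
    alpha:          the agent's fair share of the resource.
    max_bid:        upper end of the uniform bid range; an illustrative
                    assumption, since the abstract does not specify the range.
    """
    if value_quantile >= 1.0 - alpha:
        # Value is in the top alpha-quantile: submit a uniformly random bid.
        return random.uniform(0.0, max_bid)
    # Otherwise abstain (bid zero) and preserve budget for more valuable rounds.
    return 0.0
```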

SODA Conference 2016 Conference Paper

Learning and Efficiency in Games with Dynamic Population

  • Thodoris Lykouris
  • Vasilis Syrgkanis
  • Éva Tardos

We study the quality of outcomes in repeated games when the population of players is dynamically changing, and where participants use learning algorithms to adapt to the dynamic environment. The price of anarchy was originally introduced to study the Nash equilibria of one-shot games. Many games studied in computer science, such as packet routing or ad-auctions, are played repeatedly. Given the computational hardness of Nash equilibria, an attractive alternative in repeated game settings is that players use no-regret learning algorithms. The price of total anarchy considers the quality of such learning outcomes, assuming a steady environment and player population, which is rarely the case in online settings. In this paper we analyze the efficiency of repeated games in dynamically changing environments. An important trait of learning behavior is its adaptability to changing environments, provided the learning method used is adaptive, i.e., does not rely too heavily on experience from the distant past. We show that, in large classes of games, if players choose their strategies in a way that guarantees low adaptive regret, high social welfare is ensured, even under very frequent changes. A main technical tool for our analysis is the existence of a solution to the welfare maximization problem that is both close to optimal and relatively stable over time. Such a solution serves as a benchmark in the efficiency analysis of learning outcomes. We show that such a stable and close-to-optimal solution exists for many problems, even in cases where the exact optimal solution can be very unstable. We further show that a sufficient condition for the existence of stable outcomes is the existence of a differentially private algorithm for the welfare maximization problem. Hence, we draw a strong connection between differential privacy and high efficiency of learning outcomes in frequently changing repeated games. We demonstrate our techniques by focusing on two classes of games as examples: independent item auctions and congestion games. In both applications we show that adaptive learning guarantees high social welfare even with surprisingly high churn in the player population.
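
The adaptive-regret notion referenced above can be made precise. One common formalization, included here as an orientation aid and not as the paper's exact definition, compares the learner to the best fixed action on every contiguous time interval:

```latex
% Adaptive regret over T rounds with loss functions f_1, ..., f_T:
\[
\mathrm{AdaptiveRegret}(T) \;=\; \max_{1 \le s \le t \le T}
  \left( \sum_{\tau=s}^{t} f_\tau(x_\tau) \;-\; \min_{x} \sum_{\tau=s}^{t} f_\tau(x) \right).
\]
```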

STOC Conference 2013 Conference Paper

Composable and efficient mechanisms

  • Vasilis Syrgkanis
  • Éva Tardos

We initiate the study of efficient mechanism design with guaranteed good properties even when players participate in multiple mechanisms simultaneously or sequentially. We define the class of smooth mechanisms, related to the smooth games defined by Roughgarden, which can be thought of as mechanisms that generate approximately market-clearing prices. We show that smooth mechanisms result in high-quality outcomes both at equilibrium and under learning in the full-information setting, as well as at Bayesian equilibrium with uncertainty about participants. Our main result is to show that smooth mechanisms compose well: smoothness locally at each mechanism implies global efficiency. For mechanisms where good performance requires that bidders do not bid above their value, we identify the notion of a weakly smooth mechanism. Weakly smooth mechanisms, such as the Vickrey auction, are approximately efficient under the no-overbidding assumption, and the weak smoothness property is also maintained by composition. In most of the paper we assume participants have quasi-linear valuations. We also extend some of our results to settings where participants have budget constraints.
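
For context, the smoothness notion at the heart of this work is usually stated roughly as follows; this is a paraphrase of the standard form, not a quotation of the paper's definition.

```latex
% A mechanism is (\lambda, \mu)-smooth if for every valuation profile v there
% exist deviation bids b_i^*(v) such that, for every bid profile b,
\[
\sum_i u_i\big(b_i^*(v), b_{-i};\, v_i\big) \;\ge\; \lambda \cdot \mathrm{OPT}(v) \;-\; \mu \cdot \sum_i P_i(b),
\]
% where P_i(b) is the payment collected from bidder i. Such a mechanism loses
% at most a factor \max\{1, \mu\}/\lambda in welfare at equilibrium, and the
% property is preserved under composition.
```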

SODA Conference 2012 Conference Paper

Sequential auctions and externalities

  • Renato Paes Leme
  • Vasilis Syrgkanis
  • Éva Tardos

In many settings agents participate in multiple different auctions that are not necessarily implemented simultaneously. Future opportunities affect strategic considerations of the players in each auction, introducing externalities. Motivated by this consideration, we study a setting of a market of buyers and sellers, where each seller holds one item, bidders have combinatorial valuations and sellers hold item auctions sequentially. Our results are qualitatively different from those of simultaneous auctions, proving that simultaneity is a crucial aspect of previous work. We prove that if sellers hold sequential first price auctions then for unit-demand bidders (matching market) every subgame perfect equilibrium achieves at least half of the optimal social welfare, while for submodular bidders or when second price auctions are used, the social welfare can be arbitrarily worse than the optimal. We also show that a first price sequential auction for buying or selling a base of a matroid is always efficient, and implements the VCG outcome. An important tool in our analysis is studying first and second price auctions with externalities (bidders have valuations for each possible winner outcome), which can be of independent interest. We show that a Pure Nash Equilibrium always exists in a first price auction with externalities.

FOCS Conference 2011 Conference Paper

Which Networks are Least Susceptible to Cascading Failures?

  • Lawrence E. Blume
  • David A. Easley
  • Jon M. Kleinberg
  • Robert Kleinberg
  • Éva Tardos

The spread of a cascading failure through a network is an issue that comes up in many domains: in the contagious failures that spread among financial institutions during a financial crisis, through nodes of a power grid or communication network during a widespread outage, or through a human population during the outbreak of an epidemic disease. Here we study a natural model of threshold contagion: each node v is assigned a numerical threshold ℓ(v) drawn independently from an underlying distribution μ, and v will fail as soon as ℓ(v) of its neighbors fail. Despite the simplicity of the formulation, it has been very challenging to analyze the failure processes that arise from arbitrary threshold distributions; even qualitative questions concerning which graphs are the most resilient to cascading failures in these models have been difficult to resolve. Here we develop a set of new techniques for analyzing the failure probabilities of nodes in arbitrary graphs under this model, and we compare different graphs G according to their μ-risk, defined as the maximum failure probability of any node in G when thresholds are drawn from μ. We find that the space of threshold distributions has a surprisingly rich structure when we consider the risk that these thresholds induce on different graphs: small shifts in the distribution of the thresholds can favor graphs with a maximally clustered structure (i.e., cliques), those with a maximally branching structure (trees), or even intermediate hybrids.
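
The threshold contagion process itself is straightforward to simulate. The sketch below illustrates the failure dynamics described in the abstract; the graph encoding and the initial seeding are illustrative choices, not details taken from the paper.

```python
import random
from collections import deque

def simulate_cascade(adj, sample_threshold, initial_failures):
    """Simulate the threshold contagion process described in the abstract.

    adj:               dict mapping each node to a list of its neighbors.
    sample_threshold:  function returning an integer threshold drawn from mu.
    initial_failures:  nodes assumed to fail at the start (an illustrative
                       seeding choice; the paper's exact setup may differ).
    Returns the set of nodes that have failed once the cascade stops.
    """
    threshold = {v: sample_threshold() for v in adj}   # ell(v) drawn i.i.d. from mu
    failed = set(initial_failures) | {v for v in adj if threshold[v] == 0}
    failed_neighbor_count = {v: 0 for v in adj}
    queue = deque(failed)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in failed:
                continue
            failed_neighbor_count[w] += 1
            if failed_neighbor_count[w] >= threshold[w]:   # w fails once ell(w) neighbors failed
                failed.add(w)
                queue.append(w)
    return failed

# Example: a 4-cycle where thresholds are 1 with probability 1/2 and 2 otherwise.
if __name__ == "__main__":
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(simulate_cascade(adj, lambda: random.choice([1, 2]), initial_failures={0}))
```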

FOCS Conference 2010 Conference Paper

Pure and Bayes-Nash Price of Anarchy for Generalized Second Price Auction

  • Renato Paes Leme
  • Éva Tardos

The Generalized Second Price Auction has been the main mechanism used by search companies to auction positions for advertisements on search pages. In this paper we study the social welfare of the Nash equilibria of this game in various models. In the full information setting, socially optimal Nash equilibria are known to exist (i.e., the Price of Stability is 1). This paper is the first to prove bounds on the price of anarchy, and to give any bounds in the Bayesian setting. Our main result is to show that the price of anarchy is small assuming that all bidders play undominated strategies. In the full information setting we prove a bound of 1.618 for the price of anarchy for pure Nash equilibria, and a bound of 4 for mixed Nash equilibria. We also prove a bound of 8 for the price of anarchy in the Bayesian setting, when valuations are drawn independently, and the valuation is known only to the bidder and only the distributions used are common knowledge. Our proof exhibits a combinatorial structure of Nash equilibria and uses this structure to bound the price of anarchy. While establishing the structure is simple in the case of pure and mixed Nash equilibria, the extension to the Bayesian setting requires the use of novel combinatorial techniques that can be of independent interest.
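
As background, the GSP mechanism assigns advertisement slots in decreasing order of bids and charges each winner the next-highest bid per click. With click-through rates α_1 ≥ … ≥ α_k, the utility of the bidder placed in slot j is commonly written as below (standard description of the mechanism, included for orientation):

```latex
% With \pi the permutation sorting bids in decreasing order, the bidder in
% slot j pays the (j+1)-st highest bid per click, so her utility is
\[
u_j \;=\; \alpha_j \big( v_{\pi(j)} - b_{\pi(j+1)} \big).
\]
```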

STOC Conference 2008 Conference Paper

Balanced outcomes in social exchange networks

  • Jon M. Kleinberg
  • Éva Tardos

The study of bargaining has a long history, but many basic settings are still rich with unresolved questions. In particular, consider a set of agents who engage in bargaining with one another, but instead of pairs of agents interacting in isolation, agents have the opportunity to choose whom they want to negotiate with, along the edges of a graph representing social-network relations. The area of network exchange theory in sociology has developed a large body of experimental evidence for the way in which people behave in such network-constrained bargaining situations, and it is a challenging problem to develop models that are both mathematically tractable and in general agreement with the results of these experiments. We analyze a natural theoretical model arising in network exchange theory, which can be viewed as a direct extension of the well-known Nash bargaining solution to the case of multiple agents interacting on a graph. While this generalized Nash bargaining solution is surprisingly effective at picking up even subtle differences in bargaining power that have been observed experimentally on small examples, it has remained an open question to characterize the values taken by this solution on general graphs, or to find an efficient means to compute it. Here we resolve these questions, characterizing the possible values of this bargaining solution, and giving an efficient algorithm to compute the set of possible values. Our result exploits connections to the structure of matchings in graphs, including decomposition theorems for graphs with perfect matchings, and also involves the development of new techniques. In particular, the values we are seeking turn out to correspond to a novel combinatorially defined point in the interior of a fractional relaxation of the matching problem.
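
The balanced-outcome condition that generalizes Nash bargaining to graphs is commonly written as follows; the notation is chosen here for orientation and is not quoted from the paper.

```latex
% An outcome is a matching M together with values x_u \ge 0 satisfying
% x_u + x_v = w_{uv} on every matched edge (u, v) \in M. Writing
\[
\alpha_u \;=\; \max_{z \neq v,\ (u,z) \in E} \big( w_{uz} - x_z \big)^{+}
\]
% for u's best outside option, the outcome is balanced if each matched edge
% splits its surplus equally relative to the outside options:
\[
x_u - \alpha_u \;=\; x_v - \alpha_v .
\]
```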

FOCS Conference 2004 Conference Paper

The Price of Stability for Network Design with Fair Cost Allocation

  • Elliot Anshelevich
  • Anirban Dasgupta 0001
  • Jon M. Kleinberg
  • Éva Tardos
  • Tom Wexler
  • Tim Roughgarden

Network design is a fundamental problem for which it is important to understand the effects of strategic behavior. Given a collection of self-interested agents who want to form a network connecting certain endpoints, the set of stable solutions - the Nash equilibria - may look quite different from the centrally enforced optimum. We study the quality of the best Nash equilibrium, and refer to the ratio of its cost to the optimum network cost as the price of stability. The best Nash equilibrium solution has a natural meaning of stability in this context - it is the optimal solution that can be proposed from which no user will "defect". We consider the price of stability for network design with respect to one of the most widely-studied protocols for network cost allocation, in which the cost of each edge is divided equally between users whose connections make use of it; this fair-division scheme can be derived from the Shapley value, and has a number of basic economic motivations. We show that the price of stability for network design with respect to this fair cost allocation is O(log k), where k is the number of users, and that a good Nash equilibrium can be achieved via best-response dynamics in which users iteratively defect from a starting solution. This establishes that the fair cost allocation protocol is in fact a useful mechanism for inducing strategic behavior to form near-optimal equilibria. We discuss connections to the class of potential games defined by Monderer and Shapley, and extend our results to cases in which users are seeking to balance network design costs with latencies in the constructed network, with stronger results when the network has only delays and no construction costs. We also present bounds on the convergence time of best-response dynamics, and discuss extensions to a weighted game.
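
The O(log k) bound is typically derived from a potential-function argument for this fair-sharing game; the sketch below records the standard reasoning and is background rather than a quotation from the paper.

```latex
% With c_e the cost of edge e and x_e the number of users sharing e, define
\[
\Phi(S) \;=\; \sum_{e \in S} c_e\, H_{x_e}, \qquad H_j = 1 + \tfrac{1}{2} + \dots + \tfrac{1}{j}.
\]
% Then cost(S) \le \Phi(S) \le H_k \cdot cost(S), and every best-response move
% strictly decreases \Phi, so the state minimizing \Phi is a Nash equilibrium
% whose cost is at most H_k = O(\log k) times the optimum.
```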

FOCS Conference 2003 Conference Paper

Group Strategyproof Mechanisms via Primal-Dual Algorithms

  • Martin Pál
  • Éva Tardos

We develop a general method for turning a primal-dual algorithm into a group strategyproof cost-sharing mechanism. We use our method to design approximately budget-balanced cost-sharing mechanisms for two NP-complete problems: metric facility location, and single-source rent-or-buy network design. Both mechanisms are competitive, group strategyproof and recover a constant fraction of the cost. For the facility location game our cost-sharing method recovers 1/3 of the total cost, while in the network design game the cost shares pay for a 1/15 fraction of the cost of the solution.

STOC Conference 2003 Conference Paper

Near-optimal network design with selfish agents

  • Elliot Anshelevich
  • Anirban Dasgupta 0001
  • Éva Tardos
  • Tom Wexler

We introduce a simple network design game that models how independent selfish agents can build or maintain a large network. In our game every agent has a specific connectivity requirement, i.e., each agent has a set of terminals and wants to build a network in which his terminals are connected. Possible edges in the network have costs and each agent's goal is to pay as little as possible. Determining whether or not a Nash equilibrium exists in this game is NP-complete. However, when the goal of each player is to connect a terminal to a common source, we prove that there is a Nash equilibrium as cheap as the optimal network, and give a polynomial-time algorithm to find a (1+ε)-approximate Nash equilibrium that does not cost much more. For the general connection game we prove that there is a 3-approximate Nash equilibrium that is as cheap as the optimal network, and give an algorithm to find a (4.65+ε)-approximate Nash equilibrium that does not cost much more.

FOCS Conference 2001 Conference Paper

Facility Location with Nonuniform Hard Capacities

  • Martin Pál
  • Éva Tardos
  • Tom Wexler

The authors give the first constant factor approximation algorithm for the facility location problem with nonuniform, hard capacities. Facility location problems have received a great deal of attention in recent years. Approximation algorithms have been developed for many variants. Most of these algorithms are based on linear programming, but the LP techniques developed thus far have been unsuccessful in dealing with hard capacities. A local-search based approximation algorithm (M. Korupolu et al., 1998; F. A. Chudak and D. P. Williamson, 1999) is known for the special case of hard but uniform capacities. We present a local-search heuristic that yields an approximation guarantee of 9 + ε for the case of nonuniform hard capacities. To obtain this result, we introduce new operations that are natural in this context. Our proof is based on network flow techniques.

FOCS Conference 2001 Conference Paper

Truthful Mechanisms for One-Parameter Agents

  • Aaron Archer
  • Éva Tardos

The authors show how to design truthful (dominant strategy) mechanisms for several combinatorial problems where each agent's secret data is naturally expressed by a single positive real number. The goal of the mechanisms we consider is to allocate loads placed on the agents, and an agent's secret data is the cost she incurs per unit load. We give an exact characterization of the algorithms that can be used to design truthful mechanisms for such load balancing problems using appropriate side payments. We use our characterization to design polynomial-time truthful mechanisms for several problems in combinatorial optimization to which the celebrated VCG mechanism does not apply. For scheduling related parallel machines (Q||C_max), we give a 3-approximation mechanism based on randomized rounding of the optimal fractional solution. This problem is NP-complete, and the standard approximation algorithms (greedy load-balancing or the PTAS) cannot be used in truthful mechanisms. We show our mechanism to be frugal, in that the total payment needed is only a logarithmic factor more than the actual costs incurred by the machines, unless one machine dominates the total processing power. We also give truthful mechanisms for maximum flow, Q||ΣC_j (scheduling related machines to minimize the sum of completion times), optimizing an affine function over a fixed set, and special cases of uncapacitated facility location. In addition, for Q||Σw_j C_j (minimizing the weighted sum of completion times), we prove a lower bound of 2/√3 for the best approximation ratio achievable by a truthful mechanism.
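
The characterization referenced above is usually stated in terms of monotone work curves together with an explicit payment rule; the following is a paraphrase of that standard form, included for context rather than quoted from the paper.

```latex
% If agent i's private cost is t_i per unit of load and w_i(b_i, b_{-i}) is the
% load assigned to her when she bids b_i, then truthful payments exist iff
% w_i is non-increasing in b_i, in which case (up to a bid-independent term)
\[
P_i(b) \;=\; b_i\, w_i(b_i, b_{-i}) \;+\; \int_{b_i}^{\infty} w_i(u, b_{-i})\, du .
\]
```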

FOCS Conference 2000 Conference Paper

How Bad is Selfish Routing?

  • Tim Roughgarden
  • Éva Tardos

We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times (the total latency) is minimized. In many settings, including the Internet and other large-scale communication networks, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a "selfishly motivated" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. We quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and non-decreasing in the edge congestion.
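
The 4/3 bound for linear latencies is already tight on the classic two-link Pigou network; the worked example below is standard background and is not part of the abstract.

```latex
% One unit of traffic over two parallel links with latency functions
\[
\ell_1(x) = 1, \qquad \ell_2(x) = x .
\]
% Selfish routing sends all traffic on link 2 (it is never slower), for total
% latency 1 \cdot \ell_2(1) = 1. The optimum splits the traffic evenly:
\[
\tfrac{1}{2} \cdot 1 \;+\; \tfrac{1}{2} \cdot \tfrac{1}{2} \;=\; \tfrac{3}{4},
\]
% so the ratio of the selfish to the optimal cost is 1 / (3/4) = 4/3.
```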

FOCS Conference 1999 Conference Paper

Approximation Algorithms for Classification Problems with Pairwise Relationships: Metric Labeling and Markov Random Fields

  • Jon M. Kleinberg
  • Éva Tardos

In a traditional classification problem, we wish to assign one of k labels (or classes) to each of n objects, in a way that is consistent with some observed data that we have about the problem. An active line of research in this area is concerned with classification when one has information about pairwise relationships among the objects to be classified; this issue is one of the principal motivations for the framework of Markov random fields, and it arises in areas such as image processing, biometry, and document analysis. In its most basic form, this style of analysis seeks a classification that optimizes a combinatorial function consisting of assignment costs (based on the individual choice of label we make for each object) and separation costs (based on the pair of choices we make for two "related" objects). We formulate a general classification problem of this type, the metric labeling problem; we show that it contains as special cases a number of standard classification frameworks, including several arising from the theory of Markov random fields. From the perspective of combinatorial optimization, our problem can be viewed as a substantial generalization of the multiway cut problem, and equivalent to a type of uncapacitated quadratic assignment problem. We provide the first non-trivial polynomial-time approximation algorithms for a general family of classification problems of this type. Our main result is an O(log k log log k)-approximation algorithm for the metric labeling problem, with respect to an arbitrary metric on a set of k labels, and an arbitrary weighted graph of relationships on a set of objects. For the special case in which the labels are endowed with the uniform metric (all distances are the same), our methods provide a 2-approximation.
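
Concretely, the metric labeling objective combines the two cost terms described above; in common notation (an orientation aid, not quoted from the paper):

```latex
% Assign a label f(u) \in L, with |L| = k and a metric d on L, to every object u:
\[
\min_{f}\; \sum_{u} c\big(u, f(u)\big) \;+\; \sum_{(u,v) \in E} w_{uv}\, d\big(f(u), f(v)\big),
\]
% where c(u, \cdot) are the assignment costs and w_{uv} weights the pairwise
% relationship between objects u and v.
```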

FOCS Conference 1999 Conference Paper

Fairness in Routing and Load Balancing

  • Jon M. Kleinberg
  • Yuval Rabani
  • Éva Tardos

We consider the issue of network routing subject to explicit fairness conditions. The optimization of fairness criteria interacts in a complex fashion with the optimization of network utilization and throughput; in this work, we undertake an investigation of this relationship through the framework of approximation algorithms. In this work we consider the problem of selecting paths for routing so as to provide a bandwidth allocation that is as fair as possible (in the max-min sense). We obtain the first approximation algorithms for this basic optimization problem, for single-source unsplittable routings in an arbitrary directed graph. Special cases of our model include several fundamental load balancing problems, endowing them with a natural fairness criterion to which our approach can be applied. Our results form an interesting counterpart to the work of Megiddo (1974), who considered max-min fairness for single-source fractional flow. The optimization problems in our setting become NP-complete, and require the development of new techniques for relating fractional relaxations of routing to the equilibrium constraints imposed by the fairness criterion.

FOCS Conference 1995 Conference Paper

Disjoint Paths in Densely Embedded Graphs

  • Jon M. Kleinberg
  • Éva Tardos

We consider the following maximum disjoint paths problem (MDPP). We are given a large network, and pairs of nodes that wish to communicate over paths through the network; the goal is to simultaneously connect as many of these pairs as possible in such a way that no two communication paths share an edge in the network. This classical problem has been brought into focus recently in papers discussing applications to routing in high-speed networks, where the current lack of understanding of the MDPP is an obstacle to the design of practical heuristics. We consider the class of densely embedded, nearly-Eulerian graphs, which includes the two-dimensional mesh and other planar and locally planar interconnection networks. We obtain a constant-factor approximation algorithm for the maximum disjoint paths problem for this class of graphs; this improves on an O(log n)-approximation for the special case of the two-dimensional mesh due to Aumann-Rabani and the authors. For networks that are not explicitly required to be "high-capacity," this is the first constant-factor approximation for the MDPP in any class of graphs other than trees. We also consider the MDPP in the on-line setting, relevant to applications in which connection requests arrive over time and must be processed immediately. Here we obtain an asymptotically optimal O(log n)-competitive on-line algorithm for the same class of graphs; this improves on an O(log n log log n)-competitive algorithm for the special case of the mesh due to B. Awerbuch et al. (1994).

FOCS Conference 1991 Conference Paper

Fast Approximation Algorithms for Fractional Packing and Covering Problems

  • Serge A. Plotkin
  • David B. Shmoys
  • Éva Tardos

Fast algorithms that find approximate solutions for a general class of problems, which are called fractional packing and covering problems, are presented. The only previously known algorithms for solving these problems are based on general linear programming techniques. The techniques developed greatly outperform the general methods in many applications, and are extensions of a method previously applied to find approximate solutions to multicommodity flow problems. The algorithms are based on a Lagrangian relaxation technique, and an important result is a theoretical analysis of the running time of a Lagrangian relaxation based algorithm. Several applications of the algorithms are presented.
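
The generic fractional packing problem addressed by these techniques is usually posed in the following form; this is the standard formulation, included for context.

```latex
% Given a convex set P \subseteq \mathbb{R}^n, a nonnegative matrix A, and a
% positive vector b, decide whether some x \in P satisfies A x \le b; an
% \epsilon-approximate solution is an x \in P with
\[
A x \;\le\; (1 + \epsilon)\, b .
\]
% Covering problems are analogous, with the constraint A x \ge (1 - \epsilon) b.
```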

FOCS Conference 1989 Conference Paper

Interior-Point Methods in Parallel Computation

  • Andrew V. Goldberg
  • Serge A. Plotkin
  • David B. Shmoys
  • Éva Tardos

Interior-point methods for linear programming, developed in the context of sequential computation, are used to obtain a parallel algorithm for the bipartite matching problem. The algorithm runs in O*(√m) time. The results extend to the weighted bipartite matching problem and to the zero-one minimum-cost flow problem, yielding O*(√m log C) algorithms. This improves previous bounds on these problems and illustrates the importance of interior-point methods in parallel algorithm design.

FOCS Conference 1988 Conference Paper

Combinatorial Algorithms for the Generalized Circulation Problem

  • Andrew V. Goldberg
  • Serge A. Plotkin
  • Éva Tardos

A generalization of the maximum-flow problem is considered in which the amounts of flow entering and leaving an arc are linearly related. More precisely, if x(e) units of flow enter an arc e, then x(e)·λ(e) units arrive at the other end. For instance, nodes of the graph can correspond to different currencies, with the multipliers being the exchange rates. Conservation of flow is required at every node except a given source node. The goal is to maximize the amount of flow excess at the source. This problem is a special case of linear programming, and therefore can be solved in polynomial time. The authors present polynomial-time combinatorial algorithms for this problem. The algorithms are simple and intuitive.
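
In symbols, the flow-with-gains model described above imposes the following conservation constraint at every node other than the source; the notation is chosen here for illustration.

```latex
% If x(e) units enter arc e, then \lambda(e)\, x(e) units arrive at its head.
% Conservation at every node v \ne s:
\[
\sum_{e\ \mathrm{into}\ v} \lambda(e)\, x(e) \;=\; \sum_{e\ \mathrm{out\ of}\ v} x(e),
\]
% subject to arc capacities; the objective is to maximize the flow excess
% arriving at the source s.
```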

FOCS Conference 1987 Conference Paper

Approximation Algorithms for Scheduling Unrelated Parallel Machines

  • Jan Karel Lenstra
  • David B. Shmoys
  • Éva Tardos

We consider the following scheduling problem. There are m parallel machines and n independent jobs. Each job is to be assigned to one of the machines. The processing of job j on machine i requires time p_ij. The objective is to find a schedule that minimizes the makespan. Our main result is a polynomial algorithm which constructs a schedule that is guaranteed to be no longer than twice the optimum. We also present a polynomial approximation scheme for the case that the number of machines is fixed. Both approximation results are corollaries of a theorem about the relationship of a class of integer programming problems and their linear programming relaxations. In particular, we give a polynomial method to round the fractional extreme points of the linear program to integral points that nearly satisfy the constraints. In contrast to our main result, we prove that no polynomial algorithm can achieve a worst-case ratio less than 3/2 unless P = NP. We finally obtain a complexity classification for all special cases with a fixed number of processing times.
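
The 2-approximation rests on a parametric LP relaxation of the assignment problem; the formulation below is the one commonly associated with this result, reconstructed here for context rather than quoted from the paper.

```latex
% For a guessed makespan T, keep only pairs (i, j) with p_{ij} \le T and look
% for a fractional assignment x_{ij} \ge 0 with
\[
\sum_{i} x_{ij} = 1 \;\;\forall j, \qquad \sum_{j} p_{ij}\, x_{ij} \le T \;\;\forall i .
\]
% Rounding an extreme point of this LP (in which each job is split across few
% machines) yields an integral schedule of makespan at most 2T.
```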

FOCS Conference 1986 Conference Paper

An O(n^2 (m + n log n) log n) Min-Cost Flow Algorithm

  • Zvi Galil
  • Éva Tardos

The minimum-cost flow problem is the following: given a network with n vertices and m edges, find a maximum flow of minimum cost. Many network problems are easily reducible to this problem. A polynomial-time algorithm for the problem has been known for some time [EK], but only recently a strongly polynomial algorithm was discovered [Ts]. In this paper we design an O(n^2 (m + n log n) log n) algorithm. The previous best algorithm had an O(m^2 (m + n log n) log n) time bound ([F], [O]). Thus, we obtain an improvement of two orders of magnitude for dense graphs. Our algorithm is based on Fujishige's algorithm [F] (which is based on Tardos' algorithm [Ts]). Fujishige's algorithm consists of up to O(m log n) steps. Each step solves a single-source shortest path problem with nonnegative edge lengths. We modify this algorithm in order to make an improved analysis possible. The new algorithm may still consist of up to m iterations, and an iteration may still consist of up to O(m log n) steps, but we can still show that the total number of steps is bounded by O(n^2 log n). The improvement is due to a new technique that relates the time spent to the progress achieved.

FOCS Conference 1985 Conference Paper

An Application of Simultaneous Approximation in Combinatorial Optimization

  • András Frank
  • Éva Tardos

We present a preprocessing algorithm to make certain polynomial algorithms strongly polynomial. The running time of some of the known combinatorial optimization algorithms depends on the size of the objective function w. Our preprocessing algorithm replaces w by an integer-valued objective function whose size is polynomially bounded in the size of the combinatorial structure and which yields the same set of optimal solutions as w. As applications we show how existing polynomial algorithms for finding the maximum weight clique in a perfect graph and for the minimum cost submodular flow problem can be made strongly polynomial. The method relies on Lovász's simultaneous approximation algorithm.