Arrow Research search

Author name cluster

Berthold Vöcking

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

30 papers
2 author rows

Possible papers

30

STOC Conference 2014 Conference Paper

Primal beats dual on online packing LPs in the random-order model

  • Thomas Kesselheim
  • Klaus Radke
  • Andreas Tönnis
  • Berthold Vöcking

We study packing LPs in an online model where the columns are presented to the algorithm in random order. This natural problem was investigated in various recent studies motivated, e.g., by online ad allocations and yield management, where rows correspond to resources and columns to requests specifying demands for resources. Our main contribution is a 1 − O(√(log d / B))-competitive online algorithm. Here d denotes the column sparsity, i.e., the maximum number of resources that occur in a single column, and B denotes the capacity ratio, i.e., the ratio between the capacity of a resource and the maximum demand for this resource. In other words, we achieve a (1 − ε)-approximation if the capacity ratio satisfies B = Ω(log d / ε²), which is known to be best possible for any (randomized) online algorithm. Our result improves exponentially on previous work with respect to the capacity ratio. In contrast to existing results on packing LP problems, our algorithm does not use dual prices to guide the allocation of resources over time. Instead, the algorithm simply solves, for each request, a scaled version of the partially known primal program and randomly rounds the obtained fractional solution to obtain an integral allocation for this request. We show that this simple algorithmic technique is not restricted to packing LPs with large capacity ratio of order Ω(log d), but it also yields close-to-optimal competitive ratios if the capacity ratio is bounded by a constant. In particular, we prove an upper bound on the competitive ratio of Ω(d^(−1/(B−1))), for any B ≥ 2. In addition, we show that our approach can be combined with VCG payments to obtain an incentive compatible (1 − ε)-competitive mechanism for packing LPs with B = Ω(log m / ε²), where m is the number of constraints. Finally, we apply our technique to the generalized assignment problem, for which we obtain the first online algorithm with competitive ratio O(1).
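The primal rounding idea from the abstract can be sketched for the special case of a single knapsack constraint. This is a hypothetical Python illustration, not the paper's algorithm: all function names are made up, and the general setting involves many constraints and a more careful scaling and analysis. On the arrival of request l, we solve the fractional LP restricted to the first l columns with the capacity scaled by l/n, and use the new column's fractional value as an acceptance probability.

```python
import random

def frac_knapsack(values, demands, capacity):
    """Optimal fractional solution of a single knapsack LP: greedy by density."""
    x = [0.0] * len(values)
    order = sorted(range(len(values)), key=lambda i: -values[i] / demands[i])
    left = capacity
    for i in order:
        take = min(1.0, left / demands[i])
        x[i] = max(0.0, take)
        left -= x[i] * demands[i]
        if left <= 0:
            break
    return x

def online_primal_rounding(values, demands, capacity, rng):
    """For each request l (in arrival order), solve the LP on the first l
    columns with capacity scaled by l/n, then round request l's own
    fractional value into an acceptance probability (illustrative sketch)."""
    n = len(values)
    accepted, used = [], 0.0
    for l in range(1, n + 1):
        x = frac_knapsack(values[:l], demands[:l], capacity * l / n)
        p = x[l - 1]  # fractional value of the newly arrived column
        if rng.random() < p and used + demands[l - 1] <= capacity:
            accepted.append(l - 1)
            used += demands[l - 1]
    return accepted, used
```

Note that no dual prices appear anywhere: each decision is made by re-solving (a scaled version of) the primal on the columns seen so far, exactly the contrast the abstract draws with earlier dual-based approaches.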

FOCS Conference 2006 Conference Paper

On the Impact of Combinatorial Structure on Congestion Games

  • Heiner Ackermann
  • Heiko Röglin
  • Berthold Vöcking

We study the impact of combinatorial structure in congestion games on the complexity of computing pure Nash equilibria and the convergence time of best response sequences. In particular, we investigate which properties of the strategy spaces of individual players ensure a polynomial convergence time. We show that if the strategy space of each player consists of the bases of a matroid over the set of resources, then the lengths of all best response sequences are polynomially bounded in the number of players and resources. We can also prove that this result is tight, that is, the matroid property is a necessary and sufficient condition on the players' strategy spaces for guaranteeing polynomial time convergence to a Nash equilibrium. In addition, we present an approach that enables us to devise hardness proofs for various kinds of combinatorial games, including first results about the hardness of market sharing games and congestion games for overlay network design. Our approach also yields a short proof for the PLS-completeness of network congestion games. In particular, we can show that network congestion games are PLS-complete for directed and undirected networks even in the case of linear latency functions.
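The simplest matroid case mentioned in the abstract is the singleton (rank-1 uniform matroid) congestion game, where each player's strategies are the individual resources. A minimal sketch of best response dynamics for this case, with made-up names and no claim to match the paper's constructions:

```python
def best_response_dynamics(num_players, resources, latency):
    """Singleton congestion game: each player picks one resource, and
    latency(r, load) is nondecreasing in the load. Repeatedly lets players
    switch to a strictly cheaper resource until a pure Nash equilibrium is
    reached (guaranteed for matroid strategy spaces per the abstract)."""
    choice = [resources[0]] * num_players          # arbitrary start state
    load = {r: 0 for r in resources}
    load[resources[0]] = num_players
    steps, improved = 0, True
    while improved:
        improved = False
        for p in range(num_players):
            cur = choice[p]
            cur_cost = latency(cur, load[cur])
            # cost on r if player p moved there (own presence adds 1)
            best = min(resources,
                       key=lambda r: latency(r, load[r] + (0 if r == cur else 1)))
            best_cost = latency(best, load[best] + (0 if best == cur else 1))
            if best_cost < cur_cost:
                load[cur] -= 1
                load[best] += 1
                choice[p] = best
                improved = True
                steps += 1
    return choice, steps
```

With linear latencies the dynamics balance the loads; the point of the paper is that such polynomial convergence extends to all matroid strategy spaces and, conversely, fails without the matroid property.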

STOC Conference 2005 Conference Paper

Approximation techniques for utilitarian mechanism design

  • Patrick Briest
  • Piotr Krysta
  • Berthold Vöcking

This paper deals with the design of efficiently computable incentive compatible, or truthful, mechanisms for combinatorial optimization problems with multi-parameter agents. We focus on approximation algorithms for NP-hard mechanism design problems. These algorithms need to satisfy certain monotonicity properties to ensure truthfulness. Since most of the known approximation techniques do not fulfill these properties, we study alternative techniques. Our first contribution is a quite general method to transform a pseudopolynomial algorithm into a monotone FPTAS. This can be applied to various problems like, e.g., knapsack, constrained shortest path, or job scheduling with deadlines. For example, the monotone FPTAS for the knapsack problem gives a very efficient, truthful mechanism for single-minded multi-unit auctions. The best previous result for such auctions was a 2-approximation. In addition, we present a monotone PTAS for the generalized assignment problem with any bounded number of parameters per agent. The most efficient way to solve packing integer programs (PIPs) is LP-based randomized rounding, which in general is also not monotone. We show that primal-dual greedy algorithms achieve almost the same approximation ratios for PIPs as randomized rounding. The advantage is that these algorithms are inherently monotone. This way, we can significantly improve the approximation ratios of truthful mechanisms for various fundamental mechanism design problems like single-minded combinatorial auctions (CAs), unsplittable flow routing, and multicast routing. Our approximation algorithms can also be used for the winner determination in CAs with general bidders specifying their bids through an oracle.
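The monotonicity property the abstract keeps returning to can be seen in a tiny example: a greedy-by-density allocation rule for a single-resource auction. This is an illustrative sketch only (the names are hypothetical and the paper's primal-dual algorithms handle general PIPs), but it shows the key fact that a winning agent who raises her bid can only be considered earlier and therefore keeps winning:

```python
def greedy_allocation(bids, sizes, capacity):
    """Greedy allocation by bid/size density. Raising a winner's bid moves
    her earlier in the order, where at least as much capacity remains, so
    she still wins: the rule is monotone, the property needed for
    truthfulness via critical-value payments."""
    order = sorted(range(len(bids)), key=lambda i: bids[i] / sizes[i],
                   reverse=True)
    winners, used = set(), 0
    for i in order:
        if used + sizes[i] <= capacity:
            winners.add(i)
            used += sizes[i]
    return winners
```

Contrast this with LP-based randomized rounding, where a higher bid can perturb the fractional optimum and flip a former winner into a loser, which is why the paper replaces rounding by inherently monotone primal-dual greedy algorithms.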

STOC Conference 2004 Conference Paper

Typical properties of winners and losers in discrete optimization

  • René Beier
  • Berthold Vöcking

We present a probabilistic analysis for a large class of combinatorial optimization problems containing, e.g., all binary optimization problems defined by linear constraints and a linear objective function over {0,1}^n. By parameterizing which constraints are of stochastic and which are of adversarial nature, we obtain a semi-random input model that enables us to do a general average-case analysis for a large class of optimization problems while at the same time taking care of the combinatorial structure of individual problems. Our analysis covers various probability distributions for the choice of the stochastic numbers and includes smoothed analysis with Gaussian and other kinds of perturbation models as a special case. In fact, we can exactly characterize the smoothed complexity of optimization problems in terms of their random worst-case complexity: a binary optimization problem has a polynomial smoothed complexity if and only if it has a pseudopolynomial complexity. Our analysis is centered around structural properties of binary optimization problems, called winner, loser, and feasibility gaps. We show that when the coefficients of the objective function and/or some of the constraints are stochastic, then there usually exists a polynomial gap of n^(−Ω(1)) between the best and the second best solution as well as a polynomial slack to the boundary of the constraints. Similar to the condition number for linear programming, these gaps describe the sensitivity of the optimal solution to slight perturbations of the input and can be used to bound the necessary accuracy as well as the complexity for solving an instance. We exploit the gaps in the form of an adaptive rounding scheme increasing the accuracy of calculation until the optimal solution is found.
The strength of our techniques is illustrated by applications to various NP-hard optimization problems from mathematical programming, network design, and scheduling for which we obtain the first algorithms with polynomial average-case/smoothed complexity.
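The winner gap at the heart of the analysis can be computed by brute force on tiny random knapsack-type instances. A hypothetical sketch (names made up; the paper works with this quantity analytically, not by enumeration):

```python
import itertools
import random

def winner_gap(values, weights, capacity):
    """Winner gap: difference in objective between the best and second-best
    feasible 0/1 solutions, found by brute force (tiny n only)."""
    objs = sorted(
        (sum(v * x for v, x in zip(values, xs))
         for xs in itertools.product((0, 1), repeat=len(values))
         if sum(w * x for w, x in zip(weights, xs)) <= capacity),
        reverse=True)
    return objs[0] - objs[1] if len(objs) > 1 else float("inf")
```

With continuously distributed coefficients the gap is positive with probability 1, and the paper shows it is typically not too small, which is what lets the adaptive rounding scheme stop at a bounded precision.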

MFCS Conference 2003 Conference Paper

Scheduling and Traffic Allocation for Tasks with Bounded Splittability

  • Piotr Krysta
  • Peter Sanders 0001
  • Berthold Vöcking

We investigate variants of the problem of scheduling tasks on uniformly related machines to minimize the makespan. In the k-splittable scheduling problem each task can be broken into at most k ≥ 2 pieces to be assigned to different machines. In a more general SAC problem each task j has its own splittability parameter k_j ≥ 2. These problems are NP-hard and previous research focuses mainly on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. We show that the traffic allocation problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Our main result is a polynomial time, exact algorithm for the k-splittable scheduling problem as well as the SAC problem with a fixed number of machines. The running time of our algorithm is exponential in the number of machines but only linear in the number of tasks. This result is the first proof that bounded splittability reduces the complexity of scheduling, as unsplittable scheduling is known to be NP-hard already for two machines. Furthermore, since our algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem.

FOCS Conference 2000 Conference Paper

Randomized Rumor Spreading

  • Richard M. Karp
  • Christian Schindelhauer
  • Scott Shenker
  • Berthold Vöcking

This paper investigates the class of epidemic algorithms that are commonly used for the lazy transmission of updates to distributed copies of a database. These algorithms use a simple randomized communication mechanism to ensure robustness. Suppose n players communicate in parallel rounds in each of which every player calls a randomly selected communication partner. In every round, players can generate rumors (updates) that are to be distributed among all players. Whenever communication is established between two players, each one must decide which of the rumors to transmit. The major problem is that players might not know which rumors their partners have already received. For example, a standard algorithm forwarding each rumor from the calling to the called players for Θ(ln n) rounds needs to transmit the rumor Θ(n ln n) times in order to ensure that every player finally receives the rumor with high probability. We investigate whether such a large communication overhead is inherent to epidemic algorithms. On the positive side, we show that the communication overhead can be reduced significantly. We give an algorithm using only O(n ln ln n) transmissions and O(ln n) rounds. In addition, we prove the robustness of this algorithm. On the negative side, we show that any address-oblivious algorithm needs to send Ω(n ln ln n) messages for each rumor, regardless of the number of rounds. Furthermore, we give a general lower bound showing that time and communication optimality cannot be achieved simultaneously using random phone calls, i.e., every algorithm that distributes a rumor in O(ln n) rounds needs ω(n) transmissions.
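The "standard algorithm" the abstract uses as a baseline is the push protocol, which is easy to simulate. A minimal sketch (hypothetical names; the paper's improved O(n ln ln n)-transmission algorithm adds pull phases and a termination mechanism not shown here):

```python
import random

def push_rumor(n, rng):
    """Push protocol baseline: each round, every informed player calls a
    uniformly random partner and forwards the rumor. Returns the number of
    rounds and transmissions until all n players are informed."""
    informed = {0}                       # player 0 starts the rumor
    rounds = transmissions = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for p in informed:
            q = rng.randrange(n)         # random phone call
            transmissions += 1           # counted even if q already knows
            if q not in informed:
                newly.add(q)
        informed |= newly
    return rounds, transmissions
```

Running this for moderate n shows the Θ(n ln n) transmission overhead: most late-stage calls reach players who already know the rumor, which is exactly the waste the paper's algorithm reduces to O(n ln ln n).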

FOCS Conference 1999 Conference Paper

How Asymmetry Helps Load Balancing

  • Berthold Vöcking

This paper deals with balls and bins processes related to randomized load balancing, dynamic resource allocation and hashing. Suppose n balls have to be assigned to n bins, where each ball has to be placed without knowledge about the distribution of previously placed balls. The goal is to achieve an allocation that is as even as possible so that no bin gets many more balls than the average. A well known and good solution for this problem is to choose d possible locations for each ball at random, to look into each of these bins, and to place the ball into the least full among these bins. This class of algorithms has been investigated intensively in the past, but almost all previous analyses assume that the d locations for each ball are chosen uniformly and independently at random from the set of all bins. We investigate whether a non-uniform and possibly dependent choice of the d locations for a ball can improve the load balancing. Three types of selections are distinguished: 1) uniform and independent, 2) non-uniform and independent, 3) non-uniform and dependent. Our first result shows that choosing the locations in a non-uniform way (type 2) results in a better load balancing than choosing the locations uniformly (type 1). Surprisingly, this smooth load balancing is obtained by an algorithm called "Always-Go-Left" which creates an asymmetric assignment of the balls to the bins. Our second result is a lower bound on the smallest possible maximum load that can be achieved by any allocation algorithm of type 1, 2, or 3.
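The Always-Go-Left scheme described in the abstract is simple to simulate: split the bins into d groups ordered left to right, draw one candidate per group, and break ties among least-loaded candidates toward the left. A sketch under those assumptions (function names are made up; the paper's analysis, not this simulation, is what establishes the improved max-load bound):

```python
import random

def always_go_left(n_balls, n_bins, d, rng):
    """Asymmetric d-choice allocation: bins form d equal groups ordered left
    to right; each ball draws one candidate bin per group and goes to a
    least-loaded candidate, breaking ties toward the leftmost group."""
    assert n_bins % d == 0
    g = n_bins // d                       # bins per group
    load = [0] * n_bins
    for _ in range(n_balls):
        # candidate j is drawn from group j, occupying bins [j*g, (j+1)*g)
        cands = [j * g + rng.randrange(g) for j in range(d)]
        best = min(cands, key=lambda b: (load[b], b))  # ties -> leftmost
        load[best] += 1
    return load
```

Even with d = 2 the maximum load stays tiny compared to the Θ(log n / log log n) of a single uniform choice; the non-uniform, asymmetric tie-breaking is what improves on the classical uniform d-choice bound.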

FOCS Conference 1997 Conference Paper

Exploiting Locality for Data Management in Systems of Limited Bandwidth

  • Bruce M. Maggs
  • Friedhelm Meyer auf der Heide
  • Berthold Vöcking
  • Matthias Westermann

This paper deals with data management in computer systems in which the computing nodes are connected by a relatively sparse network. We consider the problem of placing and accessing a set of shared objects that are read and written from the nodes in the network. These objects are, e.g., global variables in a parallel program, pages or cache lines in a virtual shared memory system, shared files in a distributed file system, or pages in the World Wide Web. A data management strategy consists of a placement strategy that maps the objects (possibly dynamically and with redundancy) to the nodes, and an access strategy that describes how reads and writes are handled by the system (including the routing). We investigate static and dynamic data management strategies.