Author name cluster

Yossi Azar

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

69 papers
2 author rows

Possible papers (69)

AAAI Conference 2025 Conference Paper

List Update with Prediction

  • Yossi Azar
  • Shahar Lewkowicz
  • Varun Suriyanarayana

List Update is a fundamental problem in online algorithms, with a well-known 2-competitive algorithm that moves every requested element to the front. Randomization can improve the competitive ratio slightly, to 1.6, but not below 1.5. However, practical inputs are not adversarial, and one hopes to do better, particularly when additional information from a machine-learning oracle is available. With access to predictions, the goal is to incur only a slight overhead over the prediction's cost, while avoiding significant cost in case of substantial deviation. We propose a (1+epsilon)-smooth randomized algorithm with robustness O(1/epsilon^4). This guarantees that the algorithm's cost never exceeds 1+epsilon times the prediction's cost, while remaining within a factor of O(1/epsilon^4) of the optimal cost for every possible sequence. In cases where no paid swaps are permitted for the prediction, we can improve the robustness to O(1/epsilon^2) while retaining (1+epsilon)-smoothness. We complement these findings by demonstrating a lower bound of 1/epsilon on the robustness of deterministic algorithms and log(1/epsilon) for randomized ones. Finally, our experiments show that our algorithms outperform the standard competitive algorithms for this problem.
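
The 2-competitive algorithm mentioned in the opening sentence is the classical Move-To-Front rule. A minimal Python sketch of that baseline (illustrative only; it is not the paper's prediction-augmented algorithm, and the function name and cost accounting are assumptions of this sketch):

    def move_to_front_cost(initial_list, requests):
        """Serve requests with Move-To-Front and return the total access cost.
        The cost of a request is the 1-based position of the element in the
        current list; the element is then moved to the front."""
        lst = list(initial_list)
        total = 0
        for x in requests:
            pos = lst.index(x) + 1   # scan cost to reach the requested element
            total += pos
            lst.pop(pos - 1)
            lst.insert(0, x)         # move the requested element to the front
        return total

    print(move_to_front_cost(['a', 'b', 'c'], ['c', 'c', 'b']))  # 3 + 1 + 3 = 7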

NeurIPS Conference 2023 Conference Paper

Discrete-Smoothness in Online Algorithms with Predictions

  • Yossi Azar
  • Debmalya Panigrahi
  • Noam Touitou

In recent years, there has been an increasing focus on designing online algorithms with (machine-learned) predictions. The ideal learning-augmented algorithm is comparable to the optimum when given perfect predictions (consistency), to the best online approximation for arbitrary predictions (robustness), and should interpolate between these extremes as a smooth function of the prediction error. In this paper, we quantify these guarantees in terms of a general property that we call discrete-smoothness, and achieve discrete-smooth algorithms for online covering, specifically the facility location and set cover problems. For set cover, our work improves the results of Bamas, Maggiori, and Svensson (2020) by augmenting consistency and robustness with smoothness guarantees. For facility location, our work improves on prior work by Almanza et al. (2021) by generalizing to nonuniform costs and also providing smoothness guarantees by augmenting consistency and robustness.
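
As a rough illustration only (this is a generic learning-augmented schematic, not the paper's discrete-smoothness definition; the symbols η, c, ρ, and f are placeholders introduced here), the three requirements are often summarized by a single bound of the form:

    % Illustrative schematic only: eta = prediction error, c close to 1 (consistency),
    % rho = robustness bound, f = smoothness function interpolating between the two regimes.
    \[
      \mathrm{ALG}(\sigma) \;\le\; \min\Bigl\{\, c \cdot \mathrm{OPT}(\sigma) + f(\eta),\;\; \rho \cdot \mathrm{OPT}(\sigma) \Bigr\}.
    \]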

NeurIPS Conference 2022 Conference Paper

An $\alpha$-regret analysis of Adversarial Bilateral Trade

  • Yossi Azar
  • Amos Fiat
  • Federico Fusco

We study sequential bilateral trade where sellers' and buyers' valuations are completely arbitrary (i.e., determined by an adversary). Sellers and buyers are strategic agents with private valuations for the good, and the goal is to design a mechanism that maximizes efficiency (or gain from trade) while being incentive compatible, individually rational, and budget balanced. In this paper we consider gain from trade, which is harder to approximate than social welfare. We consider a variety of feedback scenarios and distinguish the cases where the mechanism posts a single price and where it can post different prices for the buyer and the seller. We show several surprising results about the separation between the different scenarios. In particular, we show that (a) it is impossible to achieve sublinear $\alpha$-regret for any $\alpha<2$, (b) with full feedback, sublinear $2$-regret is achievable, (c) with a single price and partial feedback one cannot get sublinear $\alpha$-regret for any constant $\alpha$, (d) nevertheless, posting two prices even with one-bit feedback achieves sublinear $2$-regret, and (e) there is a provable separation in the $2$-regret bounds between full and partial feedback.

FOCS Conference 2020 Conference Paper

Beyond Tree Embeddings - a Deterministic Framework for Network Design with Deadlines or Delay

  • Yossi Azar
  • Noam Touitou

We consider network design problems with deadlines or delay. All previous results for these models are based on randomized embedding of the graph into a tree (HST) and then solving the problem on this tree. We show that this is not necessary. In particular, we design a deterministic framework for these problems which is not based on embedding. This enables us to provide deterministic poly-log($n$)-competitive algorithms for Steiner tree, generalized Steiner tree, node-weighted Steiner tree, (non-uniform) facility location, and directed Steiner tree with deadlines or with delay (where $n$ is the number of nodes). Our deterministic algorithms also give improved guarantees over some previous randomized results. In addition, we show a lower bound of poly-log($n$) for some of these problems, which implies that our framework is optimal up to the power of the poly-log. Our algorithms and techniques differ significantly from those in all previous considerations of these problems.

AAMAS Conference 2019 Conference Paper

Efficient Allocation of Free Stuff

  • Yossi Azar
  • Allan Borodin
  • Michal Feldman
  • Amos Fiat
  • Kineret Segal

We study online matching settings with selfish agents when everything is free. Inconsiderate agents break ties arbitrarily amongst available choices of equal maximal value, even if the maximal value is zero. Even for the simplest case of zero/one valuations, where agents arrive online in an arbitrary order and are restricted to taking at most one item, the resulting social welfare may be negligible for a deterministic algorithm. This may be surprising when contrasted with the 1/2-approximation of the greedy algorithm in the analogous setting where agents are considerate (i.e., they don't take zero-valued items). We overcome this challenge by introducing a new class of algorithms, which we refer to as prioritization algorithms. We show that upgrading a random subset of the agents to "business class" already improves the approximation to a constant. For more general valuations, we achieve a constant approximation using log n priority classes, when the valuations are known in advance. We extend these results to settings where agents have additive valuations and are restricted to taking up to some q ≥ 1 items. Our results are tight up to a constant.
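
A toy Python sketch of the tie-breaking issue described above (hypothetical names; this is the plain greedy baseline, not the paper's prioritization algorithms):

    def greedy_matching(valuations, considerate):
        """Agents arrive in order and each takes one remaining item of maximal
        value to them (0/1 valuations). A considerate agent never takes an
        item it values at 0; an inconsiderate one may."""
        taken = set()
        welfare = 0
        for vals in valuations:                 # vals[i] = this agent's value for item i
            best, best_val = None, -1
            for item, v in enumerate(vals):
                if item not in taken and v > best_val:
                    best, best_val = item, v
            if best is None:
                continue
            if considerate and best_val == 0:
                continue                        # considerate agents skip worthless items
            taken.add(best)
            welfare += best_val
        return welfare

    # Agent 1 values nothing; agent 2 values only item 0.
    vals = [[0, 0], [1, 0]]
    print(greedy_matching(vals, considerate=False))  # 0: agent 1 grabs item 0 "for free"
    print(greedy_matching(vals, considerate=True))   # 1: item 0 is left for agent 2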

FOCS Conference 2019 Conference Paper

General Framework for Metric Optimization Problems with Delay or with Deadlines

  • Yossi Azar
  • Noam Touitou

In this paper, we present a framework used to construct and analyze algorithms for online optimization problems with deadlines or with delay over a metric space. Using this framework, we present algorithms for several different problems. We present an O(D^2)-competitive deterministic algorithm for online multilevel aggregation with delay on a tree of depth D, an exponential improvement over the O(D^4 2^D)-competitive algorithm of Bienkowski et al. (ESA '16), where the only previously known improvement was for the special case of deadlines by Buchbinder et al. (SODA '17). We also present an O(log^2 n)-competitive randomized algorithm for online service with delay over any general metric space of n points, improving upon the O(log^4 n)-competitive algorithm by Azar et al. (STOC '17). In addition, we present the problem of online facility location with deadlines. In this problem, requests arrive over time in a metric space, and need to be served until their deadlines by facilities that are opened momentarily for some cost. We also consider the problem of facility location with delay, in which the deadlines are replaced with arbitrary delay functions. For those problems, we present O(log^2 n)-competitive algorithms, with n the number of points in the metric space. The algorithmic framework we present includes techniques for the design of algorithms as well as techniques for their analysis.

FOCS Conference 2018 Conference Paper

Improved Online Algorithm for Weighted Flow Time

  • Yossi Azar
  • Noam Touitou

We discuss one of the most fundamental scheduling problems: processing jobs on a single machine to minimize the weighted flow time (weighted response time). Our main result is an O(log P)-competitive algorithm, where P is the maximum-to-minimum processing time ratio, improving upon the O(log^2 P)-competitive algorithm of Chekuri, Khanna and Zhu (STOC 2001). We also design an O(log D)-competitive algorithm, where D is the maximum-to-minimum density ratio of jobs. Finally, we show how to combine these results with the result of Bansal and Dhamdhere (SODA 2003) to achieve an O(log(min(P, D, W)))-competitive algorithm (where W is the maximum-to-minimum weight ratio), without knowing P, D, W in advance. As shown by Bansal and Chan (SODA 2009), no constant-competitive algorithm is achievable for this problem.

STOC Conference 2017 Conference Paper

Online service with delay

  • Yossi Azar
  • Arun Ganesh
  • Rong Ge 0001
  • Debmalya Panigrahi

In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of the distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, a tradeoff that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems.

SODA Conference 2017 Conference Paper

Polylogarithmic Bounds on the Competitiveness of Min-cost Perfect Matching with Delays

  • Yossi Azar
  • Ashish Chiplunkar
  • Haim Kaplan

We consider the problem of online Min-cost Perfect Matching with Delays (MPMD), recently introduced by Emek et al. (STOC 2016). This problem is defined on an underlying n-point metric space. An adversary presents real-time requests online at points of the metric space, and the algorithm is required to match them, possibly after keeping them waiting for some time. The cost incurred is the sum of the distances between matched pairs of requests (the connection cost), and the sum of the waiting times of the requests (the delay cost). We prove the first logarithmic upper bound and the first polylogarithmic lower bound on the randomized competitive ratio of this problem. We present an algorithm with a competitive ratio of O(log n), which improves the upper bound of O(log^2 n + log Δ) of Emek et al. by removing the dependence on Δ, the aspect ratio of the metric space (which can be unbounded as a function of n). The core of our algorithm is a deterministic algorithm for MPMD on metrics induced by edge-weighted trees of height h, whose cost is guaranteed to be at most O(1) times the connection cost plus O(h) times the delay cost of every feasible solution. The reduction from MPMD on arbitrary metrics to MPMD on trees is achieved using the result on embedding n-point metric spaces into distributions over weighted hierarchically separated trees of height O(log n), with distortion O(log n). We also prove a lower bound on the competitive ratio of any randomized algorithm; this is the first lower bound that increases with n, and it is attained on the metric of n equally spaced points on a line.
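
A small Python sketch of the objective as defined above, scoring a given matching as connection cost plus delay cost (the helper name and input encoding are assumptions of this sketch; it is not the paper's algorithm):

    def mpmd_cost(requests, matching, dist):
        """Cost of a perfect matching with delays.

        requests: list of (point, arrival_time)
        matching: list of (i, j, match_time) triples covering all requests,
                  with match_time >= both arrival times
        dist:     the metric, as a function dist(p, q)
        Cost = sum of distances between matched points (connection cost)
             + sum of waiting times of all requests   (delay cost).
        """
        connection = sum(dist(requests[i][0], requests[j][0]) for i, j, _ in matching)
        delay = sum((t - requests[i][1]) + (t - requests[j][1]) for i, j, t in matching)
        return connection + delay

    # Two requests on a line metric, matched as soon as both are present.
    reqs = [(0.0, 0.0), (5.0, 2.0)]
    print(mpmd_cost(reqs, [(0, 1, 2.0)], dist=lambda p, q: abs(p - q)))  # 5 + (2 + 0) = 7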

SODA Conference 2016 Conference Paper

Make-to-Order Integrated Scheduling and Distribution

  • Yossi Azar
  • Amir Epstein
  • Lukasz Jez
  • Adi Vardi

Production and distribution are fundamental operational functions in supply chains. The main challenge is to design algorithms that optimize operational performance by jointly scheduling production and delivery of customer orders. In this paper we study a model of scheduling customer orders on multiple identical machines and their distribution to customers afterwards. The goal is to minimize the total time from release to distribution plus the total distribution cost to the customers. We design the first poly-logarithmic competitive algorithm for the problem, improving upon previous algorithms with linear competitive ratios. Our model generalizes two fundamental problems: scheduling jobs on multiple identical machines to minimize the total flow time, and the TCP Acknowledgment problem.

FOCS Conference 2016 Conference Paper

Online Algorithms for Covering and Packing Problems with Convex Objectives

  • Yossi Azar
  • Niv Buchbinder
  • T.-H. Hubert Chan
  • Shahar Chen
  • Ilan Reuven Cohen
  • Anupam Gupta 0001
  • Zhiyi Huang 0002
  • Ning Kang 0001

We present online algorithms for covering and packing problems with (non-linear) convex objectives. The convex covering problem is defined as $\min_{x \in \mathbb{R}^n_+} f(x)$ subject to $Ax \ge 1$, where $f: \mathbb{R}^n_+ \to \mathbb{R}_+$ is a monotone convex function, and A is an m×n matrix with non-negative entries. In the online version, a new row of the constraint matrix, representing a new covering constraint, is revealed in each step and the algorithm is required to maintain a feasible and monotonically non-decreasing assignment x over time. We also consider a convex packing problem defined as $\max_{y \in \mathbb{R}^m_+} \sum_{j=1}^{m} y_j - g(A^T y)$, where $g: \mathbb{R}^n_+ \to \mathbb{R}_+$ is a monotone convex function. In the online version, each variable $y_j$ arrives online and the algorithm must decide the value of $y_j$ on its arrival. This represents the Fenchel dual of the convex covering program, when g is the convex conjugate of f. We use a primal-dual approach to give online algorithms for these generic problems, and use them to simplify, unify, and improve upon previous results for several applications.
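
For readability, the two programs from the abstract restated in display form:

    % Convex covering: rows of A (covering constraints) arrive one at a time,
    % and x must stay feasible and monotonically non-decreasing.
    \[
      \min_{x \in \mathbb{R}^{n}_{+}} \; f(x) \quad \text{s.t.} \quad A x \ge \mathbf{1}.
    \]
    % Convex packing: the variables y_j arrive one at a time and must be fixed on arrival;
    % with g = f^{*} (the convex conjugate of f) this is the Fenchel dual of the covering program.
    \[
      \max_{y \in \mathbb{R}^{m}_{+}} \; \sum_{j=1}^{m} y_j \;-\; g(A^{\mathsf{T}} y).
    \]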

SODA Conference 2016 Conference Paper

Packing Small Vectors

  • Yossi Azar
  • Ilan Reuven Cohen
  • Amos Fiat
  • Alan Roytman

Online d-dimensional vector packing models many settings, such as minimizing resources in data centers where jobs have multiple resource requirements (CPU, memory, etc.). However, no online d-dimensional vector packing algorithm can achieve a competitive ratio better than d. Fortunately, in many natural applications, vectors are relatively small, and thus the lower bound does not hold. For sufficiently small vectors, an O(log d)-competitive algorithm was known. We improve this to a constant competitive ratio, arbitrarily close to e ≈ 2.718, given that vectors are sufficiently small. We give improved results for the two-dimensional case. For arbitrarily small vectors, the First Fit algorithm for two-dimensional vector packing is no better than 2-competitive. We present a natural family of First Fit variants, and for optimized parameters get a competitive ratio ≈ 1.48 for sufficiently small vectors. We improve upon the 1.48 competitive ratio – not via a First Fit variant – and give a competitive ratio arbitrarily close to 4/3 for packing small, two-dimensional vectors. We show that no algorithm can achieve better than a 4/3 competitive ratio for two-dimensional vectors, even if one allows the algorithm to split vectors among arbitrarily many bins.
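
For context, here is a minimal Python sketch of the plain First Fit rule referred to above, for d-dimensional vectors and unit bin capacity in every coordinate (a baseline only; the paper's First Fit variants and their parameter choices are not reproduced here):

    def first_fit_vector_packing(vectors):
        """First Fit for d-dimensional vector packing with bin capacity 1 in
        every coordinate. Returns the list of bins (each a load vector)."""
        bins = []
        for v in vectors:
            for load in bins:
                if all(load[k] + v[k] <= 1.0 for k in range(len(v))):
                    for k in range(len(v)):
                        load[k] += v[k]          # fits: add to the first such bin
                    break
            else:
                bins.append(list(v))             # no open bin fits: open a new one
        return bins

    items = [(0.3, 0.1), (0.2, 0.6), (0.6, 0.5), (0.1, 0.1)]
    print(len(first_fit_vector_packing(items)))  # prints 2 (bins used)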

AAMAS Conference 2012 Conference Paper

Mastering multi-player games

  • Yossi Azar
  • Uriel Feige
  • Michal Feldman
  • Moshe Tennenholtz

We consider multi-player games, and the guarantees that a master player that plays on behalf of a set of players can offer them, without making any assumptions on the rationality of the other players. Our model consists of an $(n+1)$-player game, with $m$ strategies per player, in which a \emph{master} player $M$ forms a coalition with nontransferable utilities among $n$ players, and the remaining player is called the {\em independent} player. Existentially, it is shown that every game admits a \emph{product-minimax-safe} strategy for $M$ -- a strategy that guarantees for every player in $M$'s coalition an expected value of at least her \emph{product minimax value} (which is at least as high as her minimax value and is often higher). Algorithmically, for any given vector of values for the players, one can decide in polytime whether it can be ensured by $M$, and if so, compute a mixed strategy that guarantees it. In symmetric games, a product minimax strategy for $M$ can be computed efficiently, even without being given the safety vector. We also consider the performance guarantees that $M$ can offer his players in repeated settings. Our main result here is the extension of the oblivious setting of Feldman, Kalai and Tennenholtz, showing that in every symmetric game, a master player who never observes a single payoff can guarantee for each of its players a {\em similar} performance to that of the independent player, even if the latter gets to choose the payoff matrix after the fact.

FOCS Conference 2009 Conference Paper

Convergence of Local Dynamics to Balanced Outcomes in Exchange Networks

  • Yossi Azar
  • Benjamin E. Birnbaum
  • L. Elisa Celis
  • Nikhil R. Devanur
  • Yuval Peres

Bargaining games on exchange networks have been studied by both economists and sociologists. A Balanced Outcome for such a game is an equilibrium concept that combines notions of stability and fairness. In a recent paper, Kleinberg and Tardos introduced balanced outcomes to the computer science community and provided a polynomial-time algorithm to compute the set of such outcomes. Their work left open a pertinent question: are there natural, local dynamics that converge quickly to a balanced outcome? In this paper, we provide a partial answer to this question by showing that simple edge-balancing dynamics converge to a balanced outcome whenever one exists.

STOC Conference 2009 Conference Paper

Multiple intents re-ranking

  • Yossi Azar
  • Iftah Gamzu
  • Xiaoxin Yin

One of the most fundamental problems in web search is how to re-rank result web pages based on user logs. Most traditional models for re-ranking assume each query has a single intent; that is, they assume all users formulating the same query have similar preferences over the result web pages. It is clear that this is not true for a large portion of queries, as different users may have different preferences over the result web pages. Accordingly, a more accurate model should assume that queries have multiple intents. In this paper, we introduce the multiple intents re-ranking problem. This problem captures scenarios in which some user makes a query, and there is no information about its real search intent. In such cases, one would like to re-rank the search results in a way that minimizes the efforts of all users in finding their relevant web pages. More formally, the setting of this problem consists of various types of users, each of which is interested in some subset of the search results. Moreover, each user type has a non-negative profile vector. Consider some ordering of the search results. This order sets a position for each search result, and induces a position vector of the results relevant to each user type. The overhead of a user type is the dot product of its profile vector and its induced position vector. The goal is to order the search results so as to minimize the average overhead of the users. Our main result is an O(log r)-approximation algorithm for the problem, where r is the maximum number of search results that are relevant to any user type. The algorithm is based on a new technique, which we call harmonic interpolation. In addition, we consider two important special cases. The first case is when the profile vector of each user type is non-increasing. This case is a generalization of the well-known min-sum set cover problem. We extend the techniques of Feige, Lovasz and Tetali (Algorithmica '04), and present an algorithm achieving a 4-approximation. The second case is when the profile vector of each user type is non-decreasing. This case generalizes the minimum latency set cover problem, introduced by Hassin and Levin (ESA '05). We devise an LP-based algorithm that attains a 2-approximation for it.
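
A short Python sketch of the cost model defined above (the function and variable names are illustrative, and aligning the profile vector with the relevant results sorted by display position is one natural reading of the abstract):

    def average_overhead(ordering, user_types):
        """ordering: list of result ids in display order.
        user_types: list of (relevant_set, profile_vector) pairs, where
        profile_vector[k] weights the k-th relevant result (in display order).
        Overhead of a type = dot product of profile and induced position vector."""
        position = {res: i + 1 for i, res in enumerate(ordering)}   # 1-based positions
        total = 0.0
        for relevant, profile in user_types:
            induced = sorted(position[r] for r in relevant)          # induced position vector
            total += sum(w * p for w, p in zip(profile, induced))
        return total / len(user_types)

    types = [({'a', 'c'}, [1.0, 0.5]),   # cares most about its first relevant result
             ({'b'},      [1.0])]
    print(average_overhead(['a', 'b', 'c'], types))  # ((1*1 + 0.5*3) + (1*2)) / 2 = 2.25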

STOC Conference 2005 Conference Paper

Convex programming for scheduling unrelated parallel machines

  • Yossi Azar
  • Amir Epstein

We consider the classical problem of scheduling unrelated parallel machines. Each job is to be processed by exactly one machine; processing job j on machine i requires time p_ij. The goal is to find a schedule that minimizes the l_p norm of the machine load vector. Previous work showed a 2-approximation algorithm for the problem with respect to the l_∞ norm. For any fixed l_p norm, the previously known approximation algorithm has a performance of Θ(p). We provide a 2-approximation algorithm for any fixed l_p norm (p > 1). This algorithm uses a convex programming relaxation. We also give a √2-approximation algorithm for the l_2 norm, which relies on a convex quadratic programming relaxation. To the best of our knowledge, this is the first time that general convex programming techniques (apart from SDPs and CQPs) are used in the area of scheduling. We show, for any given l_p norm, a PTAS for any fixed number of machines. We also consider the multidimensional generalization of the problem in which the jobs are d-dimensional. Here the goal is to minimize the l_p norm of the generalized load vector, which is a matrix whose rows represent the machines and whose columns represent the job dimensions. For this problem we give a (d+1)-approximation algorithm for any fixed l_p norm (p > 1).
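
As a small illustration of the objective for the one-dimensional case (not the convex-programming algorithms themselves; names are illustrative), the cost of a fixed assignment is the l_p norm of the machine load vector:

    def lp_norm_of_loads(p, num_machines, assignment, proc_time):
        """assignment[j] = machine i that processes job j;
        proc_time[(i, j)] = p_ij; returns the l_p norm of the load vector."""
        loads = [0.0] * num_machines
        for j, i in enumerate(assignment):
            loads[i] += proc_time[(i, j)]
        return sum(load ** p for load in loads) ** (1.0 / p)

    times = {(0, 0): 2.0, (1, 0): 4.0, (0, 1): 3.0, (1, 1): 1.0}
    print(lp_norm_of_loads(2, 2, [0, 1], times))  # loads (2, 1) -> sqrt(5)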

STOC Conference 2005 Conference Paper

The Price of Routing Unsplittable Flow

  • Baruch Awerbuch
  • Yossi Azar
  • Amir Epstein

The essence of the routing problem in real networks is that the traffic demand from a source to a destination must be satisfied by choosing a single path between source and destination. The splittable version of this problem is when demand can be satisfied by many paths, namely a flow from source to destination. The unsplittable, or discrete, version of the problem is more realistic yet more complex from the algorithmic point of view; in some settings optimizing such unsplittable traffic flow is computationally intractable. In this paper, we assume this more realistic unsplittable model, and investigate the "price of anarchy", or deterioration of network performance measured in total traffic latency under selfish user behavior. We show that for linear edge latency functions the price of anarchy is exactly 2.618 for weighted demand and exactly 2.5 for unweighted demand. These results are easily extended to (weighted or unweighted) atomic "congestion games", where paths are replaced by general subsets. We also show that for edge latency functions that are polynomials of degree d, the price of anarchy is d^Θ(d). Our results hold also for mixed strategies. Previous results of Roughgarden and Tardos showed that for linear edge latency functions the price of anarchy is exactly 4/3 under the assumption that each user controls only a negligible fraction of the overall traffic (this result also holds for the splittable case). Note that under the assumption of negligible traffic, pure and mixed strategies are equivalent, and splittable and unsplittable models are equivalent.

STOC Conference 2003 Conference Paper

Reducing truth-telling online mechanisms to online optimization

  • Baruch Awerbuch
  • Yossi Azar
  • Adam Meyerson

We describe a general technique for converting an online algorithm B to a truthtelling mechanism. We require that the original online competitive algorithm has certain "niceness" properties, in that actions on future requests are independent of the actual value of requests which were accepted (though these actions will of course depend upon the set of accepted requests). Under these conditions, we are able to give an online truthtelling mechanism (where the values of requests are given by bids which may not accurately represent the valuation of the requesters) such that our total profit is within O(ρ + log μ) of the optimum offline profit obtained by an omniscient algorithm (one which knows the true valuations of the users). Here ρ is the competitive ratio of B for the optimization version of the problem, and μ is the ratio of the maximum to minimum valuation for a request. In general there is an Ω(log μ) lower bound on the ratio of worst-case profit for a truthtelling mechanism when compared to the profit obtained by an omniscient algorithm, so this result is in some sense best possible. In addition, we prove that our construction is resilient against many forms of "cheating" attempts, such as forming coalitions. We demonstrate applications of this result to several problems. We develop online truthtelling mechanisms for online routing and admission control of path or multicast requests, assuming large network capacities. Assuming the existence of an algorithm B for the optimization version of the problem, our techniques provide truthtelling mechanisms for general combinatorial auctions. However, designing optimization algorithms may be difficult in general because of online or approximation lower bounds. For the cases described above, we are able to design optimization algorithms B by amortizing the lost benefit from online computation (and from approximation hardness in the case of multicast) against the benefit obtained from accepted requests. We comment that our upper bounds on profit competitiveness imply, as an obvious corollary, similar bounds on global efficiency, namely the overall well-being of all the users. This contrasts with most other work on truthtelling mechanisms for general online resource allocation, where only efficiency is maximized, and competitiveness can be arbitrarily poor.

FOCS Conference 1997 Conference Paper

Buy-at-Bulk Network Design

  • Baruch Awerbuch
  • Yossi Azar

The essence of the simplest buy-at-bulk network design problem is buying network capacity "wholesale" to guarantee connectivity from all network nodes to a certain central network switch. Capacity is sold with a "volume discount": the more capacity is bought, the cheaper is the price per unit of bandwidth. We provide an O(log^2 n) randomized approximation algorithm for the problem. This solves the open problem in Salman et al. (1997). The only previously known solutions were restricted to special cases (Euclidean graphs). We solve additional natural variations of the problem, such as multi-sink network design, as well as selective network design. These problems can be viewed as generalizations of the Generalized Steiner Connectivity and Prize-Collecting Salesman (k-MST) problems. In the selective network design problem, some subset of k wells must be connected to the (single) refinery, so that the total cost is minimized.

FOCS Conference 1995 Conference Paper

Load Balancing in the Lp Norm

  • Baruch Awerbuch
  • Yossi Azar
  • Edward F. Grove
  • Ming-Yang Kao
  • P. Krishnan
  • Jeffrey Scott Vitter

In the load balancing problem, there is a set of servers, and jobs arrive sequentially. Each job can be run on some subset of the servers, and must be assigned to one of them in an online fashion. Traditionally, the assignment of jobs to servers is measured by the L_∞ norm; in other words, an assignment of jobs to servers is quantified by the maximum load assigned to any server. In this measure the performance of the greedy load balancing algorithm may be a logarithmic factor higher than the offline optimal. In many applications, the L_∞ norm is not a suitable way to measure how well the jobs are balanced. If each job sees a delay that is proportional to the number of jobs on its server, then the average delay among all jobs is proportional to the sum of the squares of the numbers of jobs assigned to the servers. Minimizing the average delay is equivalent to minimizing the Euclidean (or L_2) norm. For any fixed p, 1 ≤ p < ∞, we show that the greedy algorithm performs within a constant factor of the offline optimal with respect to the L_p norm. The constant grows linearly with p, which is best possible, but does not depend on the number of servers and jobs.
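
To make the delay argument concrete, here is a hedged Python sketch in which "greedy" is read as "assign each unit job to its least-loaded permissible server"; the exact greedy rule analyzed in the paper may differ in detail:

    def greedy_assign(jobs, num_servers):
        """jobs: list of sets of permissible servers (unit-size jobs).
        Assigns each job online to a least-loaded permissible server."""
        load = [0] * num_servers
        for allowed in jobs:
            s = min(allowed, key=lambda i: load[i])
            load[s] += 1
        return load

    load = greedy_assign([{0, 1}, {0}, {0, 1}, {1}], num_servers=2)
    # If each job's delay is proportional to the number of jobs on its server,
    # total delay is proportional to the sum of squared loads, i.e. the squared L_2 norm.
    print(load, sum(x * x for x in load))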

FOCS Conference 1994 Conference Paper

Local Optimization of Global Objectives: Competitive Distributed Deadlock Resolution and Resource Allocation

  • Baruch Awerbuch
  • Yossi Azar

The work is motivated by deadlock resolution and resource allocation problems occurring in distributed server-client architectures. We consider a very general setting which includes, as special cases, distributed bandwidth management in communication networks, as well as variations of classical problems in distributed computing and communication networking such as deadlock resolution and "dining philosophers". In the current paper, we exhibit the first local solutions with globally-optimum performance guarantees. An application of our method is distributed bandwidth management in communication networks. In this setting, deadlock resolution (and maximum fractional independent set) corresponds to admission control maximizing network throughput. Job scheduling (and minimum fractional coloring) corresponds to route selection that minimizes load.

FOCS Conference 1993 Conference Paper

Throughput-Competitive On-Line Routing

  • Baruch Awerbuch
  • Yossi Azar
  • Serge A. Plotkin

We develop a framework that allows us to address the issues of admission control and routing in high-speed networks under the restriction that once a call is admitted and routed, it has to proceed to completion and no reroutings are allowed. The "no rerouting" restriction appears in all the proposals for future high-speed networks and stems from current hardware limitations, in particular the fact that the bandwidth-delay product of the newly developed optical communication links far exceeds the buffer capacity of the network. In case the goal is to maximize the throughput, our framework yields an on-line O(log nT)-competitive strategy, where n is the number of nodes in the network and T is the maximum call duration. In other words, our strategy results in throughput that is within O(log nT) factor of the highest possible throughput achievable by an omniscient algorithm that knows all of the requests in advance. Moreover, we show that no on-line strategy can achieve a better competitive ratio. Our framework leads to competitive strategies applicable in several more general settings. Extensions include assigning each connection an associated "profit" that represents the importance of this connection, and addressing the issue of call-establishment costs.

FOCS Conference 1992 Conference Paper

On-line Load Balancing (Extended Abstract)

  • Yossi Azar
  • Andrei Z. Broder
  • Anna R. Karlin

The setup for the authors' problem consists of n servers that must complete a set of tasks. Each task can be handled only by a subset of the servers, requires a different level of service, and once assigned cannot be reassigned. They make the natural assumption that the level of service is known at arrival time, but that the duration of service is not. The on-line load balancing problem is to assign each task to an appropriate server in such a way that the maximum load on the servers is minimized. The authors derive matching upper and lower bounds for the competitive ratio of the on-line greedy algorithm for this problem, namely (3n)^{2/3}/2 (1 + o(1)), and derive a lower bound, Ω(√n), for any other deterministic or randomized on-line algorithm.

FOCS Conference 1988 Conference Paper

Parallel Comparison Algorithms for Approximation Problems

  • Noga Alon
  • Yossi Azar

The authors consider that they have n elements from a totally ordered domain and are allowed to perform p parallel comparisons in each time unit (round). They determine, up to a constant factor, the time complexity of several approximation problems in the common parallel comparison tree model of L. G. Valiant, for all admissible values of n, p, and epsilon, where epsilon is an accuracy parameter determining the quality of the required approximation. The problems considered include the approximate maximum problem, approximate sorting, and approximate merging. The results imply, as special cases, all the known results about the time complexity of parallel sorting, parallel merging, and parallel selection of the maximum (in the comparison model). They highlight one very special but representative result concerning the approximate maximum problem: they wish to find, among the given n elements, one that belongs to the largest n/2 elements, where in each round they are allowed to perform n binary comparisons. They show that log* n + Θ(1) rounds are both necessary and sufficient in the best algorithm for this problem.

FOCS Conference 1987 Conference Paper

The Average Complexity of Deterministic and Randomized Parallel Comparison Sorting Algorithms

  • Noga Alon
  • Yossi Azar

In practice, the average time of (deterministic or randomized) sorting algorithms seems to be more relevant than the worst-case time of deterministic algorithms. Still, the many known complexity bounds for parallel comparison sorting include no nontrivial lower bounds for the average time required to sort n elements by comparisons with p processors (via deterministic or randomized algorithms). We show that for p ≥ n this time is Θ(log n / log(1 + p/n)) (it is easy to show that for p ≤ n the time is Θ(n log n / p) = Θ(log n / (p/n))). Therefore even the average-case behaviour of randomized algorithms is not more efficient than the worst-case behaviour of deterministic ones.
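
Two worked instances of the stated bound, assuming p ≥ n:

    % Instances of \Theta(\log n / \log(1 + p/n)):
    \[
      p = n:\quad \frac{\log n}{\log 2} = \Theta(\log n),
      \qquad
      p = n^{1+\varepsilon}:\quad \frac{\log n}{\log\!\left(1 + n^{\varepsilon}\right)} = \Theta\!\left(\frac{1}{\varepsilon}\right).
    \]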

FOCS Conference 1986 Conference Paper

Tight Complexity Bounds for Parallel Comparison Sorting

  • Noga Alon
  • Yossi Azar
  • Uzi Vishkin

The time complexity of sorting n elements using p ≥ n processors on Valiant's parallel comparison tree model is considered. The following results are obtained. 1. We show that this time complexity is Θ(log n / log(1 + p/n)). This complements the AKS sorting network in settling the wider problem of comparison sort of n elements by p processors, where the problem for p ≤ n was resolved. To prove the lower bound, we show that to achieve time k ≤ log n, we need Ω(k n^{1+1/k}) comparisons. Häggkvist and Hell proved a similar result only for fixed k. 2. For every fixed time k, we show that: (a) Ω(n^{1+1/k} (log n)^{1/k}) comparisons are required (O(n^{1+1/k} log n) are known to be sufficient in this case), and (b) there exists a randomized algorithm for comparison sort in time k with an expected number of O(n^{1+1/k}) comparisons. This implies that for every fixed k, any deterministic comparison sort algorithm must be asymptotically worse than this randomized algorithm. The lower bound improves on Häggkvist-Hell's lower bound. 3. We show that "approximate sorting" in time 1 requires asymptotically more than n log n processors. This settles a problem raised by M. Rabin.