Arrow Research search

Author name cluster

Joseph Naor

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

60 papers
2 author rows

Possible papers

60

NeurIPS Conference 2021 Conference Paper

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks

  • Itay Hubara
  • Brian Chmiel
  • Moshe Island
  • Ron Banner
  • Joseph Naor
  • Daniel Soudry

Unstructured pruning reduces the memory footprint of deep neural networks (DNNs). Recently, researchers proposed different types of structural pruning that aim to also reduce computational complexity. In this work, we first suggest a new measure called mask diversity, which correlates with the expected accuracy of the different types of structural pruning. We focus on the recently suggested N:M fine-grained block sparsity mask, in which for each block of M weights we have at least N zeros. While N:M fine-grained block sparsity allows acceleration on actual modern hardware, it can be used only to accelerate the inference phase. In order to allow for similar accelerations in the training phase, we suggest a novel transposable fine-grained sparsity mask, where the same mask can be used for both forward and backward passes. Our transposable mask guarantees that both the weight matrix and its transpose follow the same sparsity pattern; thus, the matrix multiplication required for passing the error backward can also be accelerated. We formulate the problem of finding the optimal transposable mask as a minimum-cost flow problem. Additionally, to speed up the minimum-cost flow computation, we also introduce a fast linear-time approximation that can be used when the masks dynamically change during training. Our experiments suggest a 2x speed-up in the matrix multiplications with no accuracy degradation over vision and language models. Finally, to solve the problem of switching between different structure constraints, we suggest a method to convert a pre-trained model with unstructured sparsity to an N:M fine-grained block sparsity model with little to no training. A reference implementation can be found at https://github.com/papers-submission/structured_transposable_masks.
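
The transposable-mask idea can be illustrated on a single 4×4 block: the sketch below brute-forces the best 2:4 mask whose rows and columns each keep at most two weights, so both the masked matrix and its transpose satisfy the sparsity pattern. This is only an illustrative sketch; the paper solves the problem at scale via minimum-cost flow, and the function name here is ours.

```python
import itertools
import numpy as np

def transposable_24_mask(block):
    """Best transposable 2:4 mask for one 4x4 weight block (brute force).

    A transposable mask keeps exactly 2 entries in every row and at most 2
    in every column, so both W*mask and (W*mask)^T satisfy 2:4 sparsity.
    We maximize the total magnitude of kept weights. Illustrative only:
    the paper formulates the general problem as a minimum-cost flow.
    """
    best_mask, best_val = None, -1.0
    row_choices = list(itertools.combinations(range(4), 2))
    # choose 2 kept positions per row, then enforce the column constraint
    for choice in itertools.product(row_choices, repeat=4):
        mask = np.zeros((4, 4), dtype=bool)
        for r, cols in enumerate(choice):
            mask[r, list(cols)] = True
        if mask.sum(axis=0).max() <= 2:  # column (transpose) constraint
            val = float(np.abs(block[mask]).sum())
            if val > best_val:
                best_val, best_mask = val, mask
    return best_mask
```

Since every row keeps exactly two of its four weights and no column keeps more than two, the resulting mask is valid for both the forward and backward matrix multiplications.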

SODA Conference 2019 Conference Paper

k-Servers with a Smile: Online Algorithms via Projections

  • Niv Buchbinder
  • Anupam Gupta 0001
  • Marco Molinaro 0001
  • Joseph Naor

We consider the k-server problem on trees and HSTs. We give an algorithm based on Bregman projections. This algorithm has competitive ratios that match some of the recent results of Bubeck et al. (STOC 2018), whose algorithm was based on mirror-descent continuous dynamics prescribed via a differential inclusion.

SODA Conference 2017 Conference Paper

O(depth)-Competitive Algorithm for Online Multi-level Aggregation

  • Niv Buchbinder
  • Moran Feldman
  • Joseph Naor
  • Ohad Talmon

We consider a multi-level aggregation problem in a weighted rooted tree, studied recently by Bienkowski et al. [7]. In this problem requests arrive over time at the nodes of the tree, and each request specifies a deadline. A request is served by sending it to the root before its deadline at a cost equal to the weight of the path from the node in which it resides to the root. However, requests from different nodes can be aggregated, and served together, so as to save on cost. The cost of serving an aggregated set of requests is equal to the weight of the subtree spanning the nodes in which the requests reside. Thus, the problem is to find a competitive online aggregation algorithm that minimizes the total cost of the aggregated requests. This problem arises naturally in many scenarios, including multicasting, supply-chain management and sensor networks. It is also related to the well-studied TCP-acknowledgement problem and the online joint replenishment problem. We present an online O(D)-competitive algorithm for the problem, where D is the depth, or number of levels, of the aggregation tree. This result improves upon the D^2 · 2^D-competitive algorithm obtained recently by Bienkowski et al. [7].

FOCS Conference 2016 Conference Paper

Online Algorithms for Covering and Packing Problems with Convex Objectives

  • Yossi Azar
  • Niv Buchbinder
  • T.-H. Hubert Chan
  • Shahar Chen
  • Ilan Reuven Cohen
  • Anupam Gupta 0001
  • Zhiyi Huang 0002
  • Ning Kang 0001

We present online algorithms for covering and packing problems with (non-linear) convex objectives. The convex covering problem is defined as: min_{x ∈ R^n_+} f(x) s.t. Ax ≥ 1, where f: R^n_+ → R_+ is a monotone convex function, and A is an m×n matrix with non-negative entries. In the online version, a new row of the constraint matrix, representing a new covering constraint, is revealed in each step and the algorithm is required to maintain a feasible and monotonically non-decreasing assignment x over time. We also consider a convex packing problem defined as: max_{y ∈ R^m_+} Σ_{j=1}^m y_j − g(A^T y), where g: R^n_+ → R_+ is a monotone convex function. In the online version, each variable y_j arrives online and the algorithm must decide the value of y_j on its arrival. This represents the Fenchel dual of the convex covering program when g is the convex conjugate of f. We use a primal-dual approach to give online algorithms for these generic problems, and use them to simplify, unify, and improve upon previous results for several applications.
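
For the linear special case (classic online fractional set cover), the primal-dual idea can be sketched as a multiplicative update that keeps the assignment monotone as constraints arrive. This is a simplified, Buchbinder–Naor-style illustration in the spirit of the framework, not the exact algorithm of the paper; the function name and update constants are ours.

```python
def online_fractional_cover(costs, constraints):
    """Online fractional covering, linear special case (illustrative).

    costs[i] > 0 is the cost of variable x_i. Covering constraints arrive
    online; each is a set S of indices that must satisfy
    sum_{i in S} x_i >= 1. While the new constraint is violated, every
    variable in it grows multiplicatively (plus a small additive
    kick-start), so the assignment x only ever increases over time.
    """
    x = [0.0] * len(costs)
    for S in constraints:
        while sum(x[i] for i in S) < 1.0:
            for i in S:
                # cheap variables grow faster; the additive term lifts
                # variables off zero so the loop terminates
                x[i] = x[i] * (1.0 + 1.0 / costs[i]) + 1.0 / (costs[i] * len(S))
    return x
```

Each arriving constraint is satisfied before the next one is processed, and previously satisfied constraints stay satisfied because x is non-decreasing.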

SODA Conference 2014 Conference Paper

Competitive Analysis via Regularization

  • Niv Buchbinder
  • Shahar Chen
  • Joseph Naor

We provide a framework for designing competitive online algorithms using regularization, a widely used technique in online learning, particularly in online convex optimization. An online algorithm that uses regularization serves requests by computing, in each step, a solution to an objective function involving a smooth convex regularization function. Applying the technique of regularization allows us to obtain new results in the domain of competitive analysis. We remark that competitive analysis and online learning are two widely studied frameworks for online decision-making settings. We show that even though there are significant differences in assumptions, goals, and techniques between the two fields, one can still benefit by introducing techniques from one field to the other. In our new framework we exhibit a general O(log m)-competitive deterministic algorithm for generating a fractional solution that satisfies a time-varying set of online covering and precedence constraints, where m is the number of variables. This framework allows us to incorporate both service costs (over time) and setup costs into a host of applications. We then provide an O(log m log n)-competitive randomized algorithm for the online set cover problem with service cost, where m is the number of sets and n is the number of elements. This model allows sets to be both added and deleted over time from a solution.

SODA Conference 2014 Conference Paper

Submodular Maximization with Cardinality Constraints

  • Niv Buchbinder
  • Moran Feldman
  • Joseph Naor
  • Roy Schwartz 0002

We consider the problem of maximizing a (non-monotone) submodular function subject to a cardinality constraint. In addition to capturing well-known combinatorial optimization problems, e.g., Max-k-Coverage and Max-Bisection, this problem has applications in other more practical settings such as natural language processing, information retrieval, and machine learning. In this work we present improved approximations for two variants of the cardinality constraint for non-monotone functions. When at most k elements can be chosen, we improve the current best approximation, breaking the 1/e barrier for all values of k. When exactly k elements must be chosen, our algorithms improve the current best approximation to a factor of at least 0.356. Additionally, some of the algorithms we provide are very fast, with time complexities of O(nk), as opposed to previously known algorithms, which are continuous in nature and thus too slow for applications in the practical settings mentioned above. Our algorithms are based on two new techniques. First, we present a simple randomized greedy approach where in each step a random element is chosen from a set of "reasonably good" elements. This approach might be considered a natural substitute for the greedy algorithm of Nemhauser, Wolsey and Fisher [45], as it retains the same tight guarantee of 1 − 1/e for monotone objectives and the same time complexity of O(nk), while giving a 1/e-approximation for general non-monotone objectives (whereas the greedy algorithm of Nemhauser et al. fails to provide any constant guarantee). Second, we extend the double greedy technique, which achieves a tight 1/2-approximation for unconstrained submodular maximization, to the continuous setting. This allows us to manipulate the natural rates by which elements change, thus bounding the total number of elements chosen.
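
The randomized greedy rule described above is easy to sketch for a monotone objective such as coverage: at each step, rank remaining elements by marginal gain and add one of the top k uniformly at random. This is an illustrative sketch; the coverage objective, function name, and tie-breaking details are ours, not the paper's exact algorithm.

```python
import random

def random_greedy(elements, f, k, seed=0):
    """Random greedy for max f(S) s.t. |S| <= k, f submodular (sketch).

    Each step ranks the remaining elements by marginal gain and adds one
    of the top-k uniformly at random -- the "random element from a set of
    reasonably good elements" rule. For monotone f this retains the
    1 - 1/e guarantee in expectation.
    """
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        # marginal gain of each remaining element, best k first
        candidates = sorted(((f(S | {e}) - f(S), e) for e in set(elements) - S),
                            key=lambda t: t[0], reverse=True)[:k]
        candidates = [(g, e) for g, e in candidates if g > 0]
        if not candidates:
            break
        S.add(rng.choice(candidates)[1])
    return S
```

The only change relative to classical greedy is the random choice among the top-k candidates instead of always taking the single best one.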

FOCS Conference 2012 Conference Paper

A Tight Linear Time (1/2)-Approximation for Unconstrained Submodular Maximization

  • Niv Buchbinder
  • Moran Feldman
  • Joseph Naor
  • Roy Schwartz 0002

We consider the Unconstrained Submodular Maximization problem, in which we are given a non-negative submodular function f: 2^N → ℝ_+, and the objective is to find a subset S ⊆ N maximizing f(S). This is one of the most basic submodular optimization problems, having a wide range of applications. Some well-known problems captured by Unconstrained Submodular Maximization include Max-Cut, Max-DiCut, and variants of Max-SAT and maximum facility location. We present a simple randomized linear-time algorithm achieving a tight approximation guarantee of 1/2, thus matching the known hardness result of Feige et al. [11]. Our algorithm is based on an adaptation of the greedy approach which exploits certain symmetry properties of the problem. Our method might seem counterintuitive, since it is known that the greedy algorithm fails to achieve any bounded approximation factor for the problem.
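
The algorithm is short enough to sketch directly. Below is the randomized double greedy applied to a small cut objective; the graph and cut function are illustrative choices of ours, while the scheme itself follows the well-known description of this result.

```python
import random

def double_greedy(ground, f, seed=0):
    """Randomized double greedy for unconstrained submodular maximization.

    X grows from the empty set while Y shrinks from the ground set. For
    each element e, the clipped marginal gains a = f(X+e)-f(X) and
    b = f(Y-e)-f(Y) determine the probability a/(a+b) of keeping e.
    This achieves the tight 1/2-approximation in expectation.
    """
    rng = random.Random(seed)
    X, Y = set(), set(ground)
    for e in ground:
        a = max(f(X | {e}) - f(X), 0.0)  # gain of adding e to X
        b = max(f(Y - {e}) - f(Y), 0.0)  # gain of removing e from Y
        p = 1.0 if a + b == 0 else a / (a + b)
        if rng.random() < p:
            X.add(e)
        else:
            Y.discard(e)
    return X  # X == Y at this point
```

Every element is decided exactly once, so the algorithm makes a single linear pass and only needs value-oracle access to f.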

FOCS Conference 2011 Conference Paper

A Polylogarithmic-Competitive Algorithm for the k-Server Problem

  • Nikhil Bansal 0001
  • Niv Buchbinder
  • Aleksander Madry
  • Joseph Naor

We give the first polylogarithmic-competitive randomized algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of Õ(log^3 n · log^2 k) for any metric space on n points. This improves upon the (2k−1)-competitive algorithm of Koutsoupias and Papadimitriou (J. ACM 1995) whenever n is sub-exponential in k.

FOCS Conference 2011 Conference Paper

A Unified Continuous Greedy Algorithm for Submodular Maximization

  • Moran Feldman
  • Joseph Naor
  • Roy Schwartz 0002

The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the submodular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called "continuous greedy", successfully tackles this issue for monotone submodular objective functions; however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about 1/e. For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of 1 − 1/e. Some notable immediate implications are an improved 1/e-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints, and information-theoretic tight approximations for Submodular Max-SAT and Submodular Welfare with k players, for any number of players k. A framework for submodular optimization problems, called the contention resolution framework, was introduced recently by Chekuri et al. [11]. The improved approximation ratio of the unified continuous greedy algorithm implies improved approximation ratios for many problems through this framework. Moreover, via a parameter called stopping time, our algorithm merges the relaxation solving and re-normalization steps of the framework, and achieves, for some applications, further improvements. We also describe new monotone balanced contention resolution schemes for various matching, scheduling and packing problems, thus improving the approximations achieved for these problems via the framework.

FOCS Conference 2011 Conference Paper

Min-max Graph Partitioning and Small Set Expansion

  • Nikhil Bansal 0001
  • Uriel Feige
  • Robert Krauthgamer
  • Konstantin Makarychev
  • Viswanath Nagarajan
  • Joseph Naor
  • Roy Schwartz 0002

We study graph partitioning problems from a min-max perspective, in which an input graph on n vertices should be partitioned into k parts, and the objective is to minimize the maximum number of edges leaving a single part. The two main versions we consider are: (i) the k parts need to be of equal size, and (ii) the parts must separate a set of k given terminals. We consider a common generalization of these two problems, and design for it an O(√(log n log k))-approximation algorithm. This improves over an O(log^2 n) approximation for the second version due to Svitkina and Tardos, and a roughly O(k log n) approximation for the first version that follows from other previous work. We also give an improved O(1)-approximation algorithm for graphs that exclude any fixed minor. Our algorithm uses a new procedure for solving the Small Set Expansion problem. In this problem, we are given a graph G and the goal is to find a non-empty subset S ⊆ V of size at most ρn with minimum edge expansion. We give an O(√(log n log(1/ρ))) bicriteria approximation algorithm for the general case of Small Set Expansion, and an O(1) approximation algorithm for graphs that exclude any fixed minor.

FOCS Conference 2011 Conference Paper

Online Node-Weighted Steiner Tree and Related Problems

  • Joseph Naor
  • Debmalya Panigrahi
  • Mohit Singh

We obtain the first online algorithms for the node-weighted Steiner tree, Steiner forest and group Steiner tree problems that achieve a poly-logarithmic competitive ratio. Our algorithm for the Steiner tree problem runs in polynomial time, while those for the other two problems take quasi-polynomial time. Our algorithms can be viewed as online LP rounding algorithms in the framework of Buchbinder and Naor (Foundations and Trends in Theoretical Computer Science, 2009); however, while the natural LP formulations of these problems do lead to fractional algorithms with a poly-logarithmic competitive ratio, we are unable to round these LPs online without losing a polynomial factor. Therefore, we design new LP formulations for these problems, drawing on a combination of paradigms such as spider decompositions, low-depth Steiner trees, and generalized group Steiner problems, and use the additional structure provided by these to round the more sophisticated LPs while losing only a poly-logarithmic factor in the competitive ratio. As further applications of our techniques, we also design polynomial-time online algorithms with poly-logarithmic competitive ratios for two fundamental network design problems in edge-weighted graphs: the group Steiner forest problem (thereby resolving an open question raised by Chekuri et al. (SODA 2008)) and the single-source ℓ-vertex-connectivity problem (which complements similar results for the corresponding edge-connectivity problem due to Gupta et al. (STOC 2009)).

SODA Conference 2009 Conference Paper

Partitioning graphs into balanced components

  • Robert Krauthgamer
  • Joseph Naor
  • Roy Schwartz 0002

We consider the k-balanced partitioning problem, where the goal is to partition the vertices of an input graph G into k equally sized components, while minimizing the total weight of the edges connecting different components. We allow k to be part of the input and denote the cardinality of the vertex set by n. This problem is a natural and important generalization of well-known graph partitioning problems, including minimum bisection and minimum balanced cut. We present a (bi-criteria) approximation algorithm whose approximation factor matches or improves over previous algorithms for all relevant values of k. Our algorithm uses a semidefinite relaxation which combines ℓ_2^2 metrics with spreading metrics. Surprisingly, we show that the integrality gap of the semidefinite relaxation is Ω(log k) even for large values of k (e.g., k = n^{Ω(1)}), implying that the dependence of the approximation factor on k is necessary. This is in contrast to previous approximation algorithms for k-balanced partitioning, which are based on linear programming relaxations and whose approximation factor is independent of k.

STOC Conference 2008 Conference Paper

Randomized competitive algorithms for generalized caching

  • Nikhil Bansal 0001
  • Niv Buchbinder
  • Joseph Naor

We consider online algorithms for the generalized caching problem. Here we are given a cache of size k and pages with arbitrary sizes and fetching costs. Given a request sequence of pages, the goal is to minimize the total cost of fetching the pages into the cache. We give an online algorithm with competitive ratio O(log^2 k), which is the first algorithm for the problem with competitive ratio sublinear in k. We also give improved O(log k)-competitive algorithms for the special cases of the Bit model and the Fault model. In the Bit model, the fetching cost is proportional to the size of the page, and in the Fault model all fetching costs are uniform. Previously, an O(log^2 k)-competitive algorithm due to Irani [14] was known for both of these models. Our algorithms are based on an extension of the primal-dual framework for online algorithms which was developed by Buchbinder and Naor [7]. We first generate an O(log k)-competitive fractional algorithm for the problem. This is done by using a strengthened LP formulation with knapsack-cover constraints, where exponentially many constraints are added upon arrival of a new request. Second, we round the fractional solution online and obtain a randomized online algorithm. Our techniques provide a unified framework for caching algorithms and are substantially simpler than those previously used.

FOCS Conference 2007 Conference Paper

A Primal-Dual Randomized Algorithm for Weighted Paging

  • Nikhil Bansal 0001
  • Niv Buchbinder
  • Joseph Naor

In the weighted paging problem there is a weight (cost) for fetching each page into the cache. We design a randomized O(log k)-competitive online algorithm for the weighted paging problem, where k is the cache size. This is the first randomized o(k)-competitive algorithm, and its competitiveness matches the known lower bound for the problem. More generally, we design an O(log(k/(k − h + 1)))-competitive online algorithm for the version of the problem where the online algorithm has cache size k and the offline algorithm has cache size h ≤ k. Weighted paging is a special case (a weighted star metric) of the well-known k-server problem, for which it is a major open question whether randomization can be useful in obtaining sub-linear competitive algorithms. Therefore, abstracting and extending the insights from paging is a key step in the resolution of the k-server problem. Our solution for the weighted paging problem is based on a two-step approach. In the first step we obtain an O(log k)-competitive fractional algorithm based on a novel online primal-dual approach. In the second step we obtain a randomized algorithm by rounding the fractional solution online to an actual distribution on integral cache solutions. We conclude with a randomized O(log N)-competitive algorithm for the well-studied Metrical Task System problem (MTS) on a metric defined by a weighted star on N leaves, improving upon a previous O(log^2 N)-competitive algorithm of Blum et al. [9].

FOCS Conference 2006 Conference Paper

Improved Bounds for Online Routing and Packing Via a Primal-Dual Approach

  • Niv Buchbinder
  • Joseph Naor

In this work we study a wide range of online and offline routing and packing problems with various objectives. We provide a unified approach, based on a clean primal-dual method, for the design of online algorithms for these problems, as well as improved bounds on the competitive factor. In particular, our analysis uses weak duality rather than a tailor-made (i.e., problem-specific) potential function. We demonstrate our ideas and results in the context of routing problems. Using our primal-dual approach, we develop a new generic online routing algorithm that outperforms previous algorithms suggested earlier by Y. Azar et al. (1993, 1997). We then show the applicability of our generic algorithm to various models and provide improved algorithms for achieving coordinate-wise competitiveness, maximizing throughput, and minimizing maximum load. In particular, we improve the results obtained by A. Goel et al. (2001) by an O(log n) factor for the problem of achieving coordinate-wise competitiveness, and by an O(log log n) factor for the problem of maximizing the throughput. For some of the settings we also prove improved lower bounds. We believe our results further our understanding of the applicability of the primal-dual method to online algorithms, and we are confident that the method will prove useful in other online scenarios. Finally, we revisit the notions of coordinate-wise and prefix competitiveness in an offline setting. We design the first polynomial-time algorithm that computes an almost optimal coordinate-wise routing for several routing models. We also revisit routing models previously studied by A. Kumar and J. M. Kleinberg (2000) and A. Goel and A. Meyerson (2005), and prove tight lower and upper bounds of Θ(log n) on prefix competitiveness for these models.

FOCS Conference 2004 Conference Paper

Machine Minimization for Scheduling Jobs with Interval Constraints

  • Julia Chuzhoy
  • Sudipto Guha
  • Sanjeev Khanna
  • Joseph Naor

The problem of scheduling jobs with interval constraints is a well-studied classical scheduling problem. The input to the problem is a collection of n jobs where each job has a set of intervals on which it can be scheduled. The goal is to minimize the total number of machines needed to schedule all jobs subject to these interval constraints. In the continuous version, the allowed intervals associated with a job form a continuous time segment, described by a release date and a deadline. In the discrete version of the problem, the set of allowed intervals for a job is given explicitly. So far, only an O(log n / log log n)-approximation is known for either version of the problem, obtained by a randomized rounding of a natural linear programming relaxation of the problem. In fact, we show here that this analysis is tight for both versions of the problem by providing a matching lower bound on the integrality gap of the linear program. Moreover, even when all jobs can be scheduled on a single machine, the discrete case has recently been shown to be Ω(log log n)-hard to approximate. In this paper, we provide improved approximation factors for the number of machines needed to schedule all jobs in the continuous version of the problem. Our main result is an O(1)-approximation algorithm when the optimal number of machines needed is bounded by a fixed constant. Thus, our results separate the approximability of the continuous and the discrete cases of the problem. For general instances, we strengthen the natural linear programming relaxation in a recursive manner by forbidding certain configurations which cannot arise in an integral feasible solution. This yields an O(OPT)-approximation, where OPT denotes the number of machines needed by an optimal solution. Combined with earlier results, our work implies an O(√(log n / log log n))-approximation for any value of OPT.

FOCS Conference 2004 Conference Paper

The Hardness of Metric Labeling

  • Julia Chuzhoy
  • Joseph Naor

The metric labeling problem is an elegant and powerful mathematical model capturing a wide range of classification problems. The input to the problem consists of a set of labels and a weighted graph. Additionally, a metric distance function on the labels is defined, and for each label and each vertex, an assignment cost is given. The goal is to find a minimum-cost assignment of the vertices to the labels. The cost of the solution consists of two parts: the assignment costs of the vertices and the separation costs of the edges (each edge pays its weight times the distance between the two labels to which its endpoints are assigned). Due to the simple structure and variety of the applications, the problem and its special cases (with various distance functions on the labels) have recently received much attention. Metric labeling has a known logarithmic approximation, and it has been an open question for several years whether a constant approximation exists. We refute this possibility and show that no constant approximation can be obtained for the problem unless P=NP; we also show that the problem is Ω(√(log n))-hard to approximate unless NP has quasi-polynomial time algorithms.

FOCS Conference 2002 Conference Paper

Covering Problems with Hard Capacities

  • Julia Chuzhoy
  • Joseph Naor

We consider the classical vertex cover and set cover problems with the addition of hard capacity constraints. This means that a set (vertex) can only cover a limited number of its elements (adjacent edges) and the number of available copies of each set (vertex) is bounded. This is a natural generalization of the classical problems that also captures resource limitations in practical scenarios. We obtain the following results. For the unweighted vertex cover problem with hard capacities we give a 3-approximation algorithm which is based on randomized rounding with alterations. We prove that the weighted version is at least as hard as the set cover problem. This is an interesting separation between the approximability of weighted and unweighted versions of a "natural" graph problem. A logarithmic approximation factor for both the set cover and the weighted vertex cover problem with hard capacities follows from the work of Wolsey (1982) on submodular set cover. We provide in this paper a simple and intuitive proof for this bound.

FOCS Conference 1997 Conference Paper

A 2-Approximation Algorithm for the Directed Multiway Cut Problem

  • Joseph Naor
  • Leonid Zosin

A directed multiway cut separates a set of terminals s_1, …, s_k in a directed capacitated graph G=(V, E). Finding a minimum capacity directed multiway cut is an NP-complete problem. We give a polynomial-time algorithm that achieves an approximation factor of 2 for this problem. This improves the result of Garg, Vazirani and Yannakakis (1994), who gave an algorithm that achieves an approximation factor of 2 log k. Our approximation algorithm uses a novel technique for relaxing a multiway flow function in order to find a directed multiway cut. It also implies that the integrality gap of the linear program for the directed multiway cut problem is at most 2.

FOCS Conference 1997 Conference Paper

Improved Approximations for Shallow-Light Spanning Trees

  • Joseph Naor
  • Baruch Schieber

We consider the bicriteria optimization problem of computing a shallow-light tree. Given a directed graph with two unrelated cost functions defined on its edges: weight and length, and a designated root vertex, the goal is to find a minimum weight spanning tree such that the path lengths from its root to the rest of the vertices are bounded. This problem has several applications in network and VLSI design, and information retrieval. We give a polynomial time algorithm for finding a spanning tree whose weight is O(log |V|) times the weight of an optimal shallow-light tree, where the path lengths from the root to the rest of the vertices are at most twice the given bounds. We extend our technique to handle two variants of the problem: one in which the length bound is given on the average length of a path from the root to a vertex, and another tricriteria budgeted version. Our paper provides the first non-trivial approximation factors for directed graphs, and improves on previous results for undirected graphs.

FOCS Conference 1996 Conference Paper

An 8-Approximation Algorithm for the Subset Feedback Vertex Set Problem

  • Guy Even
  • Joseph Naor
  • Leonid Zosin

We present an 8-approximation algorithm for the problem of finding a minimum weight subset feedback vertex set. The input in this problem consists of an undirected graph G=(V, E) with vertex weights w(v) and a subset of vertices S called special vertices. A cycle is called interesting if it contains at least one special vertex. A subset of vertices is called a subset feedback vertex set with respect to S if it intersects every interesting cycle. The goal is to find a minimum weight subset feedback vertex set. The best previous algorithm for the general case provided only a logarithmic approximation factor. The minimum weight subset feedback vertex set problem generalizes two NP-complete problems: the minimum weight feedback vertex set problem in undirected graphs and the minimum weight multiway vertex cut problem. The main tool that we use in our algorithm and its analysis is a new version of multi-commodity flow, which we call relaxed multi-commodity flow. Relaxed multi-commodity flow is a hybrid of multi-commodity flow and multi-terminal flow.

FOCS Conference 1995 Conference Paper

Divide-and-Conquer Approximation Algorithms via Spreading Metrics (Extended Abstract)

  • Guy Even
  • Joseph Naor
  • Satish Rao
  • Baruch Schieber

We present a novel divide-and-conquer paradigm for approximating NP-hard graph optimization problems. The paradigm models graph optimization problems that satisfy two properties: first, a divide-and-conquer approach is applicable; second, a fractional spreading metric is computable in polynomial time. The spreading metric assigns fractional lengths to either edges or vertices of the input graph, such that all subgraphs on which the optimization problem is non-trivial have large diameters. In addition, the spreading metric provides a lower bound, τ, on the cost of solving the optimization problem. We present a polynomial time approximation algorithm for problems modeled by our paradigm whose approximation factor is O(mi.

FOCS Conference 1995 Conference Paper

The Loading Time Scheduling Problem (Extended Abstract)

  • Randeep Bhatia
  • Samir Khuller
  • Joseph Naor

In this paper we study precedence-constrained scheduling problems where the tasks can only be executed on a specified subset of the machines. Each machine has a loading time that is incurred only for the first task scheduled on the machine in a particular run. This basic scheduling problem arises in the context of machining on numerically controlled machines, query optimization in databases, and in other artificial intelligence applications. We give the first non-trivial approximation algorithm for this problem. We also prove non-trivial lower bounds on the best possible approximation ratios for these problems, improving on the bounds implied by the non-approximability results for the shortest common supersequence problem. We use the same algorithmic technique to obtain approximation algorithms for a problem arising in the context of code generation for parallel machines, and for the weighted shortest common supersequence problem.

FOCS Conference 1989 Conference Paper

Flow in Planar Graphs with Multiple Sources and Sinks (Extended Abstract)

  • Gary L. Miller
  • Joseph Naor

Given a planar network with many sources and sinks, the problem of computing the maximum flow from the sources to the sinks is investigated. An algorithm that runs in O(log^2 n) time using O(n^1.5) processors on an exclusive-read exclusive-write parallel random-access machine (EREW PRAM) is obtained, when the amount of flow (demand) at each source and sink is given as input. When the demands are unknown, the problem remains open. However, in the special case in which the sources and sinks all lie on one face (and the demands are unknown), an algorithm that computes the maximum flow with time complexity O(log^3 n log log n) using O(n^1.5) processors is given. The results also hold for more general networks, namely, when the edge capacities have both lower and upper bounds.

FOCS Conference 1989 Conference Paper

The Probabilistic Method Yields Deterministic Parallel Algorithms

  • Rajeev Motwani 0001
  • Joseph Naor
  • Moni Naor

A method is provided for converting randomized parallel algorithms into deterministic parallel algorithms. The approach is based on a parallel implementation of the method of conditional probabilities. Results obtained by applying the method to the set balancing problem, lattice approximation, edge-coloring of graphs, random sampling, and combinatorial constructions are presented. The general form in which the method of conditional probabilities is applied sequentially is described, along with the reasons why this form does not lend itself to parallelization. The general form of the case for which the method of conditional probabilities can be applied in the parallel context is then given.