Arrow Research search

Author name cluster

Ashish Chiplunkar

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
1 author row

Possible papers (6)

FOCS 2022 · Conference Paper

Factorial Lower Bounds for (Almost) Random Order Streams

  • Ashish Chiplunkar
  • John Kallaugher
  • Michael Kapralov
  • Eric Price 0001

In this paper we introduce and study the STREAMINGCYCLES problem, a random-order streaming version of the Boolean Hidden Hypermatching problem, which has been instrumental in streaming lower bounds over the past decade. In this problem the edges of a graph G, comprising n/ℓ disjoint length-ℓ cycles on n vertices, are partitioned randomly among n players. Every edge is annotated with an independent uniformly random bit, and the players' task is to output, for some cycle in G, the sum (modulo 2) of the bits on its edges, after one round of sequential communication. Our main result is an ℓ^Ω(ℓ) lower bound on the communication complexity of STREAMINGCYCLES, which is tight up to constant factors in the exponent. Applications of our lower bound for STREAMINGCYCLES include an essentially tight lower bound for component collection in (almost) random-order graph streams, making progress towards a conjecture of Peng and Sohler [SODA'18], and the first exponential space lower bounds for random walk generation.
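As a concrete illustration of the problem setup, a toy STREAMINGCYCLES instance can be built directly from the description above: n/ℓ disjoint ℓ-cycles with an independent uniform random bit on each edge, where the target output for a chosen cycle is the parity of its bits. This is a hypothetical sketch of the input structure only, not the paper's hard distribution or the communication protocol.

```python
import random

def make_instance(num_cycles, ell, seed=0):
    """Build a STREAMINGCYCLES-style input: num_cycles disjoint cycles of
    length ell, each edge annotated with a uniform random bit."""
    rng = random.Random(seed)
    edges = []  # (u, v, bit) triples; cycle c uses vertices c*ell .. c*ell+ell-1
    for c in range(num_cycles):
        base = c * ell
        for i in range(ell):
            u = base + i
            v = base + (i + 1) % ell
            edges.append((u, v, rng.randint(0, 1)))
    return edges

def cycle_parity(edges, cycle_index, ell):
    """The players' target output: sum mod 2 of the bits on one cycle."""
    base = cycle_index * ell
    return sum(b for (u, v, b) in edges if base <= u < base + ell) % 2
```

In the actual problem these edges would be partitioned randomly among n players, each seeing only its own share; the lower bound says that computing any cycle's parity after one round of sequential communication requires ℓ^Ω(ℓ) communication.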

ICML 2020 · Conference Paper

How to Solve Fair k-Center in Massive Data Models

  • Ashish Chiplunkar
  • Sagar Sudhir Kale
  • Sivaramakrishnan Natarajan Ramamoorthy

Fueled by massive data, important decision making is being automated with the help of algorithms; fairness in algorithms has therefore become an especially important research topic. In this work, we design new streaming and distributed algorithms for the fair k-center problem, which models fair data summarization. The streaming and distributed models of computation have the attractive feature of handling massive data sets that do not fit into main memory. Our main contributions are: (a) the first distributed algorithm, which has a provably constant approximation ratio and is extremely parallelizable, and (b) a two-pass streaming algorithm with a provable approximation guarantee matching that of the best known algorithm (which is not a streaming algorithm). Our algorithms are easy to implement in practice, fast (with linear running times), use very little working memory and communication, and outperform existing algorithms on several real and synthetic data sets. To complement our distributed algorithm, we also give a hardness result for natural distributed algorithms, which holds even for the special case of k-center.
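For context on the unconstrained problem the fair variant builds on, the classical greedy 2-approximation for k-center (Gonzalez's farthest-point heuristic) fits in a few lines. This is a baseline sketch only; the paper's contribution is adding fairness constraints and the streaming/distributed machinery on top, which this snippet does not attempt.

```python
def dist(p, q):
    """Euclidean distance between two points given as coordinate tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def greedy_k_center(points, k):
    """Gonzalez's heuristic for plain k-center: start anywhere, then
    repeatedly add the point farthest from the centers chosen so far.
    The resulting covering radius is at most twice the optimum."""
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers
```

The fair version constrains how many centers may come from each demographic group, which breaks this simple greedy argument and is what makes the streaming and distributed settings nontrivial.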

FOCS 2018 · Conference Paper

Testing Graph Clusterability: Algorithms and Lower Bounds

  • Ashish Chiplunkar
  • Michael Kapralov
  • Sanjeev Khanna
  • Aida Mousavifar
  • Yuval Peres

We consider the problem of testing graph cluster structure: given access to a graph G = (V, E), can we quickly determine whether the graph can be partitioned into a few clusters with good inner conductance, or is far from any such graph? This is a generalization of the well-studied problem of testing graph expansion, where one wants to distinguish between the graph having good expansion (i.e. being a good single cluster) and the graph having a sparse cut (i.e. being a union of at least two clusters). A recent work of Czumaj, Peng, and Sohler (STOC'15) gave an ingenious sublinear time algorithm for testing k-clusterability in time Õ(n^{1/2} poly(k)). Their algorithm implicitly embeds a random sample of vertices of the graph into Euclidean space, and then clusters the samples based on estimates of Euclidean distances between the points. This yields a very efficient testing algorithm, but only works if the cluster structure is very strong: it is necessary to assume that the gap between conductances of accepted and rejected graphs is at least logarithmic in the size of the graph G. In this paper we show how one can leverage more refined geometric information, namely angles as opposed to distances, to obtain a sublinear time tester that works even when the gap is a sufficiently large constant. Our tester is based on the singular value decomposition of a natural matrix derived from random walk transition probabilities from a small sample of seed nodes. We complement our algorithm with a matching lower bound on the query complexity of testing clusterability. Our lower bound is based on a novel property testing problem, which we analyze using Fourier analytic tools. As a byproduct of our techniques, we also achieve new lower bounds for the problem of approximating MAX-CUT value in sublinear time.
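The geometric intuition behind the angle-based approach can be seen on a toy graph: random-walk distributions started from seeds in the same cluster point in similar directions (small angle, cosine near 1), while distributions from seeds in different clusters do not. The graph, walk length, and cosine comparison below are illustrative assumptions, not the paper's actual SVD-based tester.

```python
def walk_distribution(adj, seed, steps):
    """Distribution of a lazy random walk (stay put with prob. 1/2)
    started at `seed` after `steps` steps, on an adjacency-list graph."""
    n = len(adj)
    p = [0.0] * n
    p[seed] = 1.0
    for _ in range(steps):
        q = [0.0] * n
        for u in range(n):
            q[u] += p[u] / 2  # lazy self-loop
            for v in adj[u]:
                q[v] += p[u] / (2 * len(adj[u]))
        p = q
    return p

def cosine(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Two triangles joined by a single edge (2-3): a 2-clusterable toy graph.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
same = cosine(walk_distribution(adj, 0, 4), walk_distribution(adj, 1, 4))
cross = cosine(walk_distribution(adj, 0, 4), walk_distribution(adj, 4, 4))
```

Here `same` (seeds 0 and 1, both in the left triangle) exceeds `cross` (seeds 0 and 4, opposite triangles); the paper turns this kind of angular separation, extracted via the SVD of a walk-probability matrix, into a sublinear-time tester.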

SODA 2017 · Conference Paper

Polylogarithmic Bounds on the Competitiveness of Min-cost Perfect Matching with Delays

  • Yossi Azar
  • Ashish Chiplunkar
  • Haim Kaplan

We consider the problem of online Min-cost Perfect Matching with Delays (MPMD), recently introduced by Emek et al. (STOC 2016). This problem is defined on an underlying n-point metric space. An adversary presents real-time requests online at points of the metric space, and the algorithm is required to match them, possibly after keeping them waiting for some time. The cost incurred is the sum of the distances between matched pairs of requests (the connection cost) plus the sum of the waiting times of the requests (the delay cost). We prove the first logarithmic upper bound and the first polylogarithmic lower bound on the randomized competitive ratio of this problem. We present an algorithm with a competitive ratio of O(log n), which improves the upper bound of O(log² n + log Δ) of Emek et al. by removing the dependence on Δ, the aspect ratio of the metric space (which can be unbounded as a function of n). The core of our algorithm is a deterministic algorithm for MPMD on metrics induced by edge-weighted trees of height h, whose cost is guaranteed to be at most O(1) times the connection cost plus O(h) times the delay cost of every feasible solution. The reduction from MPMD on arbitrary metrics to MPMD on trees uses the embedding of n-point metric spaces into distributions over weighted hierarchically separated trees of height O(log n), with distortion O(log n). We also prove a lower bound on the competitive ratio of any randomized algorithm. This is the first lower bound which increases with n, and it is attained on the metric of n equally spaced points on a line.
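The two cost terms can be made concrete with a small accounting sketch on the line metric. The instance, the matching, and the rule "match each pair at the later arrival time" are all illustrative assumptions; an actual MPMD algorithm may deliberately wait longer, and this is not the paper's algorithm.

```python
# Sketch: cost accounting for MPMD on the line metric. A solution pays, for
# each matched pair, the distance between the two requests (connection cost)
# plus both requests' waiting times (delay cost).
def mpmd_cost(requests, matching):
    """requests: list of (arrival_time, position); matching: list of (i, j)
    index pairs. Each pair is matched at the later arrival time, the
    earliest feasible moment for that pair."""
    total = 0.0
    for i, j in matching:
        ti, xi = requests[i]
        tj, xj = requests[j]
        t_match = max(ti, tj)                    # earliest feasible matching time
        connection = abs(xi - xj)                # distance on the line
        delay = (t_match - ti) + (t_match - tj)  # both waiting times
        total += connection + delay
    return total

# Two requests: time 0 at position 0, and time 3 at position 1.
# Matching them at time 3 costs |0 - 1| + (3 - 0) + (3 - 3) = 4.
print(mpmd_cost([(0, 0), (3, 1)], [(0, 1)]))  # 4.0
```

The tension the competitive analysis captures is exactly this trade-off: matching early keeps the delay term small but may force a long-distance (expensive connection) pairing, while waiting for a closer partner accrues delay cost.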

FOCS 2013 · Conference Paper

On Randomized Memoryless Algorithms for the Weighted K-Server Problem

  • Ashish Chiplunkar
  • Sundar Vishwanathan

The weighted k-server problem is a generalization of the k-server problem in which the cost of moving a server of weight β_i through a distance d is β_i · d. The weighted server problem on uniform spaces models caching where caches have different write costs. We prove tight bounds on the performance of randomized memoryless algorithms for this problem on uniform metric spaces. We prove that there is an α_k-competitive memoryless algorithm for this problem, where α_k = α_{k−1}² + 3α_{k−1} + 1 and α_1 = 1. On the other hand, we also prove a lower bound which is strong evidence for our conjecture that no randomized memoryless algorithm can have a competitive ratio better than α_k. To prove the upper bound of α_k, we develop a framework to bound from above the competitive ratio of any randomized memoryless algorithm for this problem. The key technical contribution is a method for working with potential functions defined implicitly as the solution of a linear system. The result is robust in the sense that a small change in the probabilities used by the algorithm results in a small change in the upper bound on the competitive ratio. The above result has two important implications. First, it yields an α_k-competitive memoryless algorithm for the weighted k-server problem on uniform spaces; this is the first memoryless algorithm that is competitive for k > 2. For k = 2, our algorithm agrees with the one given by Chrobak and Sgall. Second, it helps us prove that the Harmonic algorithm, which chooses probabilities in inverse proportion to weights, has a competitive ratio of kα_k. Before this work, the only algorithm known to be competitive for every k was a carefully crafted deterministic algorithm due to Fiat and Ricklin. Their algorithm uses memory crucially, and their bound on the competitive ratio is more than 2^{4^k}. Our algorithm is not only memoryless, but also has a considerably improved competitive ratio of α_k.
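The recurrence for α_k stated above is easy to evaluate directly, which makes its doubly exponential growth visible. The `harmonic_probabilities` helper is an illustrative rendering of "probabilities in inverse proportion to weights"; the function names and the example weights are assumptions for this sketch, not code from the paper.

```python
def alpha(k):
    """Evaluate the recurrence alpha_k = alpha_{k-1}^2 + 3*alpha_{k-1} + 1
    with alpha_1 = 1 (the competitive-ratio bound from the abstract)."""
    a = 1  # alpha_1
    for _ in range(k - 1):
        a = a * a + 3 * a + 1
    return a

def harmonic_probabilities(weights):
    """Harmonic rule: move server i with probability proportional to 1/beta_i,
    so light (cheap-to-move) servers move more often."""
    inv = [1.0 / b for b in weights]
    s = sum(inv)
    return [x / s for x in inv]

for k in range(1, 5):
    print(k, alpha(k))  # 1, 5, 41, 1805 — doubly exponential growth
```

With hypothetical weights [1.0, 3.0], the Harmonic rule moves the light server with probability 0.75 and the heavy one with probability 0.25; the paper shows this simple rule is kα_k-competitive, while the paper's own memoryless algorithm achieves α_k.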