Arrow Research search

Author name cluster

Christian Coester

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

12 papers
2 author rows

Possible papers

12

NeurIPS Conference 2024 Conference Paper

Learning-Augmented Priority Queues

  • Ziyad Benomar
  • Christian Coester

Priority queues are one of the most fundamental and widely used data structures in computer science. Their primary objective is to efficiently support the insertion of new elements with assigned priorities and the extraction of the highest priority element. In this study, we investigate the design of priority queues within the learning-augmented framework, where algorithms use potentially inaccurate predictions to enhance their worst-case performance. We examine three prediction models spanning different use cases, and we show how the predictions can be leveraged to enhance the performance of priority queue operations. Moreover, we demonstrate the optimality of our solution and discuss some possible applications.
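As a concrete illustration of one way rank predictions could speed up insertions (a hypothetical sketch, not necessarily the paper's construction): a sorted-list priority queue can start an exponential search at the predicted rank, so an insertion costs roughly $O(\log \eta)$ comparisons, where $\eta$ is the rank error.

```python
import bisect

class PredictedRankPQ:
    """Sorted-list priority queue; each insertion starts an exponential
    (galloping) search at a predicted rank, so its comparison cost
    scales with the prediction error rather than the queue size."""

    def __init__(self):
        self.items = []  # kept sorted, smallest priority first

    def insert(self, priority, predicted_rank):
        n = len(self.items)
        pos = max(0, min(predicted_rank, n))
        # Gallop outward from the predicted position to bracket
        # the true insertion point.
        step, lo = 1, pos
        while lo > 0 and self.items[lo - 1] > priority:
            lo = max(0, lo - step)
            step *= 2
        step, hi = 1, pos
        while hi < n and self.items[hi] < priority:
            hi = min(n, hi + step)
            step *= 2
        # Binary search within the bracketed window.
        idx = bisect.bisect_left(self.items, priority, lo, hi)
        self.items.insert(idx, priority)

    def extract_min(self):
        return self.items.pop(0)
```

Even badly wrong predictions only increase the search window; sortedness is always preserved.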

ICML Conference 2023 Conference Paper

Mixing Predictions for Online Metric Algorithms

  • Antonios Antoniadis 0001
  • Christian Coester
  • Marek Eliás 0001
  • Adam Polak 0001
  • Bertrand Simon 0001

A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of $\ell$ predictors, we obtain a competitive ratio of $O(\ell^2)$, and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can get a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
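One classic way to trail the cheapest of $\ell$ simulated predictors while keeping the number of switches bounded (a simplified sketch for intuition only, not the paper's $O(\ell^2)$-competitive construction, which must also account for movement costs in the metric space):

```python
def combine(costs, factor=2.0):
    """Follow the currently followed predictor until its cumulative
    cost exceeds `factor` times the best cumulative cost, then switch
    to the current leader.  The multiplicative gap bounds how often
    switches can happen.

    costs: per-step cost vectors, costs[t][i] = cost of predictor i
    at step t (hypothetical interface).  Returns the sequence of
    followed predictor indices."""
    ell = len(costs[0])
    total = [0.0] * ell
    leader = 0
    choices = []
    for step in costs:
        for i, c in enumerate(step):
            total[i] += c
        best = min(range(ell), key=lambda i: total[i])
        if total[leader] > factor * total[best]:
            leader = best  # switch only when the gap is large
        choices.append(leader)
    return choices
```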

NeurIPS Conference 2023 Conference Paper

Sorting with Predictions

  • Xingjian Bai
  • Christian Coester

We explore the fundamental problem of sorting through the lens of learning-augmented algorithms, where algorithms can leverage possibly erroneous predictions to improve their efficiency. We consider two different settings: In the first setting, each item is provided a prediction of its position in the sorted list. In the second setting, we assume there is a "quick-and-dirty" way of comparing items, in addition to slow-and-exact comparisons. For both settings, we design new and simple algorithms using only $O(\sum_i \log \eta_i)$ exact comparisons, where $\eta_i$ is a suitably defined prediction error for the $i$th element. In particular, as the quality of predictions deteriorates, the number of comparisons degrades smoothly from $O(n)$ to $O(n\log n)$. We prove that this comparison complexity is theoretically optimal with respect to the examined error measures. An experimental evaluation against existing adaptive and non-adaptive sorting algorithms demonstrates the potential of applying learning-augmented algorithms in sorting tasks.
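A naive version of the first setting can be sketched as follows: order the items by their predicted positions (no item comparisons needed), then repair with insertion sort. This uses $O(n + \sum_i \eta_i)$ exact comparisons, where $\eta_i$ is the displacement error; the paper's algorithms improve this to $O(\sum_i \log \eta_i)$ via exponential search.

```python
def sort_with_predictions(items, predicted_pos):
    """Naive illustration of position predictions: place items in
    predicted order for free, then repair with insertion sort, whose
    comparison count is proportional to the total displacement error."""
    order = sorted(range(len(items)), key=lambda i: predicted_pos[i])
    a = [items[i] for i in order]
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1           # one exact comparison
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, comparisons
```

With perfect predictions this does only n - 1 confirming comparisons; with adversarial predictions it degrades to insertion sort's quadratic worst case, which is what the paper's exponential-search refinement avoids.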

SODA Conference 2022 Conference Paper

Learning-Augmented Weighted Paging

  • Nikhil Bansal 0001
  • Christian Coester
  • Ravi Kumar 0001
  • Manish Purohit
  • Erik Vee

We consider a natural semi-online model for weighted paging, where at any time the algorithm is given predictions, possibly with errors, about the next arrival of each page. The model is inspired by Belady's classic optimal offline algorithm for unweighted paging, and extends the recently studied model for learning-augmented paging [45, 50, 52] to the weighted setting. For the case of perfect predictions, we provide an ℓ-competitive deterministic and an O(log ℓ)-competitive randomized algorithm, where ℓ is the number of distinct weight classes. Both these bounds are tight, and imply an O(log W)- and O(log log W)-competitive ratio, respectively, when the page weights lie between 1 and W. Previously, it was not known how to use these predictions in the weighted setting, and only bounds of k and O(log k) were known, where k is the cache size. Our results also generalize to the interleaved paging setting and to the case of imperfect predictions, with the competitive ratios degrading smoothly from O(ℓ) and O(log ℓ) to O(k) and O(log k), respectively, as the prediction error increases. Our results are based on several insights on structural properties of Belady's algorithm and the sequence of page arrival predictions, and novel potential functions that incorporate these predictions. For the case of unweighted paging, the results imply a very simple potential-function-based proof of the optimality of Belady's algorithm, which may be of independent interest.
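In the unweighted case, following the predictions means applying Belady's rule to the predicted arrival times: on a fault, evict the cached page whose predicted next request is furthest in the future. A minimal sketch with a hypothetical interface:

```python
def predictive_paging(requests, predictions, k):
    """Follow-the-prediction eviction for unweighted paging: on a
    fault with a full cache, evict the page with the furthest
    predicted next arrival (Belady's rule on possibly erroneous
    predictions).

    predictions[t]: predicted next arrival time of requests[t] after
    time t (use float('inf') if the page is predicted never to
    return).  Returns the number of page faults."""
    cache = {}            # page -> predicted next arrival time
    faults = 0
    for t, page in enumerate(requests):
        if page in cache:
            cache[page] = predictions[t]   # refresh the prediction
            continue
        faults += 1
        if len(cache) >= k:
            victim = max(cache, key=cache.get)  # furthest predicted arrival
            del cache[victim]
        cache[page] = predictions[t]
    return faults
```

With perfect predictions this coincides with Belady's optimal offline algorithm; the weighted setting treated in the paper is substantially harder.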

FOCS Conference 2022 Conference Paper

Shortest Paths without a Map, but with an Entropic Regularizer

  • Sébastien Bubeck
  • Christian Coester
  • Yuval Rabani

In a 1989 paper titled “shortest paths without a map”, Papadimitriou and Yannakakis introduced an online model of searching in a weighted layered graph for a target node, while attempting to minimize the total length of the path traversed by the searcher. This problem, later called layered graph traversal, is parametrized by the maximum cardinality k of a layer of the input graph. It is an online setting for dynamic programming, and it is known to be a rather general and fundamental model of online computing, which includes as special cases other acclaimed models. The deterministic competitive ratio for this problem was soon discovered to be exponential in k, and it is now nearly resolved: it lies between $\Omega(2^{k})$ and $O(k2^{k})$. Regarding the randomized competitive ratio, in 1993 Ramesh proved, surprisingly, that this ratio has to be at least $\Omega(k^{2}/\log^{1+\varepsilon}k)$ (for any constant $\varepsilon>0$). In the same paper, Ramesh also gave an $O(k^{13})$-competitive randomized online algorithm. Since 1993, no progress has been reported on the randomized competitive ratio of layered graph traversal. In this work we show how to apply the mirror descent framework on a carefully selected evolving metric space, and obtain an $O(k^{2})$-competitive randomized online algorithm, nearly matching the known lower bound on the randomized competitive ratio.

NeurIPS Conference 2021 Conference Paper

Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds

  • Antonios Antoniadis
  • Christian Coester
  • Marek Elias
  • Adam Polak
  • Bertrand Simon

We study the online problem of minimizing power consumption in systems with multiple power-saving states. During idle periods of unknown lengths, an algorithm has to choose between power-saving states of different energy consumption and wake-up costs. We develop a learning-augmented online algorithm that makes decisions based on (potentially inaccurate) predicted lengths of the idle periods. The algorithm's performance is near-optimal when predictions are accurate and degrades gracefully with increasing prediction error, with a worst-case guarantee almost identical to the optimal classical online algorithm for the problem. A key ingredient in our approach is a new algorithm for the online ski-rental problem in the learning-augmented setting with tight dependence on the prediction error. We support our theoretical findings with experiments.
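The two-state special case is the classic learning-augmented ski-rental problem. A standard deterministic scheme in the style of Purohit et al. (a sketch of the general idea, not the paper's multi-state algorithm) delays or advances the buy decision depending on whether the prediction is trusted:

```python
import math

def ski_rental_with_prediction(b, predicted_days, actual_days, lam=0.5):
    """Learning-augmented ski rental sketch: renting costs 1/day,
    buying costs b.  The parameter lam in (0, 1] trades consistency
    for robustness: roughly (1 + lam)-consistent when the prediction
    is right and (1 + 1/lam)-robust when it is wrong."""
    if predicted_days >= b:
        buy_day = math.ceil(lam * b)      # trust the prediction: buy early
    else:
        buy_day = math.ceil(b / lam)      # distrust it: buy late
    if actual_days < buy_day:
        return actual_days                # rented the whole time
    return (buy_day - 1) + b              # rented, then bought
```

For example, with b = 10 and lam = 0.5, a correct "long season" prediction gives cost 14 against an optimum of 10, while a wrong "short season" prediction still caps the cost at 29, within a factor 3 of optimal.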

ICML Conference 2020 Conference Paper

Online metric algorithms with untrusted predictions

  • Antonios Antoniadis 0001
  • Christian Coester
  • Marek Eliás 0001
  • Adam Polak 0001
  • Bertrand Simon 0001

Machine-learned predictors, although achieving very good results for inputs resembling training data, cannot possibly provide perfect predictions in all situations. Still, decision-making systems that are based on such predictors need not only to benefit from good predictions but also to achieve a decent performance when the predictions are inadequate. In this paper, we propose a prediction setup for arbitrary metrical task systems (MTS) (e.g., caching, k-server and convex body chasing) and online matching on the line. We utilize results from the theory of online algorithms to show how to make the setup robust. Specifically for caching, we present an algorithm whose performance, as a function of the prediction error, is exponentially better than what is achievable for general MTS. Finally, we present an empirical evaluation of our methods on real-world datasets, which suggests practicality.
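A generic way to make such a setup robust is to simulate both a follow-the-prediction algorithm and a worst-case-competitive one, and follow whichever is currently cheaper, switching only on large gaps. This is a rough sketch of the standard technique; the paper's combination handles the metric movement costs that are glossed over here.

```python
def robustify(cost_ftp, cost_robust, gamma=2.0):
    """Follow the prediction-based algorithm ('P') or the robust
    algorithm ('R'), switching when the followed one's cumulative
    cost exceeds gamma times the other's.  cost_ftp / cost_robust
    are the per-step costs of the two simulated algorithms
    (hypothetical interface); returns the sequence of choices."""
    tp = tr = 0.0
    follow = 'P'
    out = []
    for cp, cr in zip(cost_ftp, cost_robust):
        tp += cp
        tr += cr
        if follow == 'P' and tp > gamma * tr:
            follow = 'R'
        elif follow == 'R' and tr > gamma * tp:
            follow = 'P'
        out.append(follow)
    return out
```

The multiplicative gap ensures the combined cost stays within a constant factor of the cheaper of the two algorithms, regardless of prediction quality.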

STOC Conference 2020 Conference Paper

Unbounded lower bound for k-server against weak adversaries

  • Marcin Bienkowski
  • Jaroslaw Byrka
  • Christian Coester
  • Lukasz Jez

We study the resource augmented version of the k-server problem, also known as the k-server problem against weak adversaries or the (h,k)-server problem. In this setting, an online algorithm using k servers is compared to an offline algorithm using h servers, where h ≤ k. For uniform metrics, it has been known since the seminal work of Sleator and Tarjan (1985) that for any ε > 0, the competitive ratio drops to a constant if k = (1+ε)·h. This result was later generalized to weighted stars (Young 1994) and trees of bounded depth (Bansal et al. 2017). The main open problem for this setting is whether a similar phenomenon occurs on general metrics. We resolve this question negatively. With a simple recursive construction, we show that the competitive ratio is at least Ω(log log h), even as k → ∞. Our lower bound holds for both deterministic and randomized algorithms. It also disproves the existence of a competitive algorithm for the infinite server problem on general metrics.

MFCS Conference 2019 Conference Paper

Better Bounds for Online Line Chasing

  • Marcin Bienkowski
  • Jaroslaw Byrka
  • Marek Chrobak
  • Christian Coester
  • Lukasz Jez
  • Elias Koutsoupias

We study online competitive algorithms for the line chasing problem in Euclidean spaces R^d, where the input consists of an initial point P_0 and a sequence of lines X_1, X_2, ..., X_m, revealed one at a time. At each step t, when the line X_t is revealed, the algorithm must determine a point P_t in X_t. An online algorithm is called c-competitive if for any input sequence the path P_0, P_1, ..., P_m it computes has length at most c times the length of the optimum path. The line chasing problem is a variant of the more general convex body chasing problem, where the sets X_t are arbitrary convex sets. To date, the best competitive ratio for the line chasing problem was 28.1, even in the plane. We improve this bound by providing a simple 3-competitive algorithm for any dimension d. We complement this bound by a matching lower bound for algorithms that are memoryless in the sense of our algorithm, and a lower bound of 1.5358 for arbitrary algorithms. The latter bound also improves upon the previous lower bound of sqrt(2) ≈ 1.414 for convex body chasing in 2 dimensions.
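The problem setup is easy to state in code. Below is the natural memoryless greedy baseline that simply moves to the closest point of each revealed line; it illustrates the model only and is not the paper's 3-competitive algorithm.

```python
def project(p, a, d):
    """Orthogonal projection of point p onto the line {a + t*d} in R^n."""
    t = sum((pi - ai) * di for pi, ai, di in zip(p, a, d)) \
        / sum(di * di for di in d)
    return tuple(ai + t * di for ai, di in zip(a, d))

def greedy_chase(p0, lines):
    """Greedy line chasing: from the current position, move to the
    nearest point of the newly revealed line.  Each line is given as
    a pair (a, d) of anchor point and direction vector."""
    path = [p0]
    for a, d in lines:
        path.append(project(path[-1], a, d))
    return path
```

Greedy moves are locally optimal but can be forced to pay far more than the optimum path on adversarial sequences of nearly parallel lines, which is why more careful algorithms are needed.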