Arrow Research search

Author name cluster

Alex Elenter

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers (2)

AAAI 2025 Conference Paper

Scenario-Based Robust Optimization of Tree Structures

  • Spyros Angelopoulos
  • Christoph Dürr
  • Alex Elenter
  • Georgii Melidi

We initiate the study of tree structures in the context of scenario-based robust optimization. Specifically, we study Binary Search Trees (BSTs) and Huffman coding, two fundamental techniques for efficiently managing and encoding data based on a known set of frequencies of keys. Given a number of distinct scenarios, each defined by a frequency distribution over the keys, our objective is to compute a single tree of best-possible performance, relative to any scenario. We consider, as performance metrics, the competitive ratio, which compares multiplicatively the cost of the solution to the tree of least cost among all scenarios, as well as the regret, which induces a similar, but additive comparison. For BSTs, we show that the problem is NP-hard across both metrics. We also obtain an optimal competitive ratio that is logarithmic in the number of scenarios. For Huffman Trees, we likewise prove NP-hardness, and we present an algorithm with logarithmic regret, which we prove to be near-optimal by showing a corresponding lower bound. Lastly, we give a polynomial-time algorithm for computing Pareto-optimal BSTs with respect to their regret, assuming scenarios defined by uniform distributions over the keys. This setting captures, in particular, the first study of fairness in the context of data structures. We provide an experimental evaluation of all algorithms. To this end, we also provide a mixed integer linear programming formulation for computing optimal trees.
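The two performance metrics named in the abstract can be sketched directly from their definitions. This is an illustrative reading only, not code from the paper; the function names and the per-scenario cost numbers below are hypothetical.

```python
# Illustrative sketch (not from the paper): the two metrics described in
# the abstract, given the cost of one candidate tree in each scenario and
# the cost of the best tree for each scenario individually.

def competitive_ratio(candidate_costs, optimal_costs):
    """Worst-case multiplicative gap of a single tree across all scenarios."""
    return max(c / o for c, o in zip(candidate_costs, optimal_costs))

def regret(candidate_costs, optimal_costs):
    """Worst-case additive gap of a single tree across all scenarios."""
    return max(c - o for c, o in zip(candidate_costs, optimal_costs))

# Hypothetical example: one candidate tree evaluated under three scenarios.
cand = [12.0, 9.0, 15.0]  # cost of the candidate tree per scenario
opt = [10.0, 9.0, 10.0]   # cost of the best tree for each scenario
print(competitive_ratio(cand, opt))  # 1.5
print(regret(cand, opt))             # 5.0
```

A scenario-robust tree is then one minimizing either quantity over all candidate trees, which is what the paper's algorithms and MILP formulation target.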

NeurIPS 2024 Conference Paper

Overcoming Brittleness in Pareto-Optimal Learning Augmented Algorithms

  • Alex Elenter
  • Spyros Angelopoulos
  • Christoph Dürr
  • Yanni Lefki

The study of online algorithms with machine-learned predictions has gained considerable prominence in recent years. One of the common objectives in the design and analysis of such algorithms is to attain (Pareto) optimal tradeoffs between the consistency of the algorithm, i.e., its performance assuming perfect predictions, and its robustness, i.e., the performance of the algorithm under adversarial predictions. In this work, we demonstrate that this optimization criterion can be extremely brittle, in that the performance of Pareto-optimal algorithms may degrade dramatically even in the presence of imperceptible prediction error. To remedy this drawback, we propose a new framework in which the smoothness in the performance of the algorithm is enforced by means of a user-specified profile. This allows us to regulate the performance of the algorithm as a function of the prediction error, while simultaneously maintaining the analytical notion of consistency/robustness tradeoffs, adapted to the profile setting. We apply this new approach to a well-studied online problem, namely the one-way trading problem. For this problem, we further address another limitation of the state-of-the-art Pareto-optimal algorithms, namely the fact that they are tailored to worst-case, and extremely pessimistic, inputs. We propose a new Pareto-optimal algorithm that leverages any deviation from the worst-case input to its benefit, and introduce a new metric that allows us to compare any two Pareto-optimal algorithms via a dominance relation.
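The brittleness the abstract describes can be illustrated with a toy comparison between a performance curve that jumps at any nonzero prediction error and one smoothed by a profile. All numbers and function names here are hypothetical; this is a sketch of the concept, not the paper's algorithm or profile definition.

```python
# Illustrative sketch (hypothetical numbers, not from the paper).
# `c` is the consistency (performance ratio at zero prediction error),
# `r` is the robustness (worst-case ratio under adversarial predictions).

def brittle_ratio(error, c=1.1, r=3.0):
    """A brittle tradeoff: the ratio jumps to the robustness bound
    as soon as the prediction error is nonzero, however small."""
    return c if error == 0 else r

def profiled_ratio(error, c=1.1, r=3.0, scale=0.5):
    """A profile-smoothed tradeoff: the ratio degrades gradually with
    the prediction error and is capped at the robustness bound."""
    return min(r, c + (error / scale) * (r - c))

# Even a tiny error forfeits consistency under the brittle curve,
# while the profiled curve degrades in proportion to the error.
print(brittle_ratio(1e-9))    # 3.0
print(profiled_ratio(1e-9))   # ~1.1
```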