Arrow Research

Author name cluster

Kurt Mehlhorn

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

52 papers
2 author rows

Possible papers


AAMAS Conference 2025 Conference Paper

EFX Allocations and Orientations on Bipartite Multi-graphs: A Complete Picture

  • Mahyar Afshinmehr
  • Alireza Danaei
  • Mehrafarin Kazemi
  • Kurt Mehlhorn
  • Nidhi Rathi

We consider the fundamental problem of fairly allocating a set of indivisible items among agents whose valuations are represented by a multi-graph: agents appear as vertices, items as edges between them, and each vertex (agent) values only the set of its incident edges (items). The goal is to find a fair, i.e., envy-free up to any item (EFX), allocation. This model was recently introduced by Christodoulou et al. [19], who show that EFX allocations always exist on simple graphs for monotone valuations, i.e., when any two agents share at most one edge (item). A natural question is what happens when we go beyond simple graphs to various classes of multi-graphs. We answer this question affirmatively for the valuation classes of bipartite multi-graphs and multi-cycles. Our main positive result is that EFX allocations always exist on bipartite multi-graphs for agents with additive valuations and can be computed in polynomial time, joining the few settings where EFX allocations are known to exist for an arbitrary number of agents. Next, we study EFX orientations (i.e., allocations in which every item is allocated to one of its two endpoint agents) and give a complete picture of when they exist for bipartite multi-graphs, depending on two parameters: the number of edges shared between any two agents and the diameter of the graph. Finally, we prove that it is NP-complete to determine whether a given fair division instance on a bipartite multi-graph admits an EFX orientation.

AAAI Conference 2025 Conference Paper

Welfare-Optimal Serial Dictatorships Have Polynomial Query Complexity

  • Ioannis Caragiannis
  • Kurt Mehlhorn
  • Nidhi Rathi

Serial dictatorship is a simple mechanism for coordinating agents in solving combinatorial optimization problems according to their preferences. The most representative such problem is one-sided matching, in which a set of n agents have values for a set of n items, and the objective is to compute a matching of the agents to the items of maximum total value (a.k.a. social welfare). Following the recent framework of Caragiannis and Rathi (2023), we consider a model in which the agent-item values are not available upfront but become known by querying agent sequences. In particular, when the agents are asked to act in a sequence, they respond by picking their favorite item that has not been picked by agents who acted before and revealing their value for it. Can we compute an agent sequence that induces a social-welfare-optimal matching? We answer this question affirmatively and present an algorithm that uses a polynomial number of queries (specifically, O(n^5)). This solves the main open problem stated by Caragiannis and Rathi (2023). Our analysis uses a potential function argument that measures progress towards learning the underlying edge-weight information. Furthermore, the algorithm has a truthful implementation by adapting the paradigm of VCG payments.
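The query model in this abstract lends itself to a short sketch. The following is an illustrative Python rendering of a single serial-dictatorship query (the function name and the toy value matrix are ours, not the paper's): agents act in the given order, each picks its favorite remaining item, and the induced matching and social welfare are returned.

```python
def serial_dictatorship(order, values):
    """Run one query: agents act in `order`; values[i][j] is agent i's
    value for item j. Returns the induced matching and its welfare."""
    remaining = set(range(len(values)))
    matching, welfare = {}, 0
    for agent in order:
        # the agent picks its favorite item among those not yet taken
        item = max(remaining, key=lambda j: values[agent][j])
        matching[agent] = item
        welfare += values[agent][item]
        remaining.remove(item)
    return matching, welfare

# illustrative 3-agent, 3-item instance
values = [[3, 1, 0],
          [2, 2, 1],
          [1, 0, 4]]
matching, welfare = serial_dictatorship([0, 1, 2], values)
```

The algorithm of the paper searches over such orderings, using the revealed values to steer toward a welfare-optimal one.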

IJCAI Conference 2023 Conference Paper

Fair and Efficient Allocation of Indivisible Chores with Surplus

  • Hannaneh Akrami
  • Bhaskar Ray Chaudhury
  • Jugal Garg
  • Kurt Mehlhorn
  • Ruta Mehta

We study fair division of indivisible chores among n agents with additive disutility functions. Two well-studied fairness notions for indivisible items are envy-freeness up to one/any item (EF1/EFX), and the standard notion of economic efficiency is Pareto optimality (PO). There is a noticeable gap between the results known for EF1 and EFX in the goods and chores settings; the case of chores turns out to be much more challenging. We reduce this gap by providing slightly relaxed versions of the known results on goods for the chores setting. Interestingly, our algorithms run in polynomial time, unlike their analogous versions in the goods setting. We introduce the concept of k-surplus in the chores setting, which means that up to k extra chores are allocated to the agents, each of them a copy of an original chore. We present a polynomial-time algorithm which gives EF1 and PO allocations with (n-1)-surplus. We relax the notion of EFX slightly and define tEFX, which requires that the envy from agent i to agent j is removed upon the transfer of any chore from i's bundle to j's bundle. We give a polynomial-time algorithm that, in the chores case for 3 agents, returns an allocation which is either proportional or tEFX. Note that proportionality is a very strong criterion in the case of indivisible items, and hence both notions we guarantee are desirable.
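The tEFX condition defined above can be stated as a small predicate. This is our formalization of the transfer condition from the abstract, not the authors' code; the disutility data is illustrative.

```python
def is_tefx(bundles, disutility):
    """bundles: agent -> set of chores; disutility[i][c]: agent i's cost of c
    (additive). Checks the transfer condition d_i(B_i) - d_i(c) <=
    d_i(B_j) + d_i(c) for every ordered pair i, j and every chore c in B_i."""
    def cost(i, chores):
        return sum(disutility[i][c] for c in chores)
    for i in bundles:
        for j in bundles:
            if i == j:
                continue
            for c in bundles[i]:
                # transferring c from i's bundle to j's must remove i's envy
                if cost(i, bundles[i]) - disutility[i][c] > \
                        cost(i, bundles[j]) + disutility[i][c]:
                    return False
    return True

# a balanced split passes; dumping everything on one agent fails
ok = is_tefx({0: {'a', 'b'}, 1: {'c'}},
             {0: {'a': 1, 'b': 1, 'c': 5}, 1: {'a': 2, 'b': 2, 'c': 1}})
bad = is_tefx({0: {'a', 'b', 'c'}, 1: set()},
              {0: {'a': 1, 'b': 1, 'c': 1}, 1: {'a': 1, 'b': 1, 'c': 1}})
```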

NeurIPS Conference 2023 Conference Paper

Randomized and Deterministic Maximin-share Approximations for Fractionally Subadditive Valuations

  • Hannaneh Akrami
  • Kurt Mehlhorn
  • Masoud Seddighin
  • Golnoosh Shahkarami

We consider the problem of guaranteeing maximin share (MMS) when allocating a set of indivisible items to a set of agents with fractionally subadditive (XOS) valuations. For XOS valuations, it has been previously shown that for some instances no allocation can guarantee a fraction better than 1/2 of the maximin share to all the agents. Also, a deterministic allocation exists that guarantees 0.219225 of the maximin share of each agent. Our results involve both deterministic and randomized allocations. On the deterministic side, we improve the best approximation guarantee for fractionally subadditive valuations to 3/13 = 0.230769. We develop new ideas on allocating large items in our allocation algorithm, which might be of independent interest. Furthermore, we investigate randomized algorithms and best-of-both-worlds fairness guarantees. We propose a randomized allocation that is 1/4-MMS ex-ante and 1/8-MMS ex-post for XOS valuations. Moreover, we prove an upper bound of 3/4 on the ex-ante guarantee for this class of valuations.
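For readers unfamiliar with the maximin share, here is a brute-force illustration for tiny instances with additive valuations (a special case of XOS); the function name and data are ours. An agent's MMS is the best minimum-bundle value it can secure by partitioning all items into n bundles itself and receiving the worst one.

```python
from itertools import product

def mms_value(values, n):
    """values: item -> this agent's (additive) value; n: number of agents.
    Exponential-time reference: tries every assignment of items to n bundles
    and maximizes the value of the worst bundle."""
    items = list(values)
    best = 0
    for assignment in product(range(n), repeat=len(items)):
        bundle_vals = [0] * n
        for item, b in zip(items, assignment):
            bundle_vals[b] += values[item]
        best = max(best, min(bundle_vals))
    return best

# three items, two agents: the agent can guarantee min(5, 3 + 3) = 5
mms = mms_value({'x': 5, 'y': 3, 'z': 3}, n=2)
```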

JAIR Journal 2022 Journal Article

Fair Division of Indivisible Goods for a Class of Concave Valuations

  • Bhaskar Ray Chaudhury
  • Yun Kuen Cheung
  • Jugal Garg
  • Naveen Garg
  • Martin Hoefer
  • Kurt Mehlhorn

We study the fair and efficient allocation of a set of indivisible goods among agents, where each good has several copies, and each agent has an additively separable concave valuation function with a threshold. These valuations capture the property of diminishing marginal returns, and they are more general than the well-studied case of additive valuations. We present a polynomial-time algorithm that approximates the optimal Nash social welfare (NSW) up to a factor of e^{1/e} ≈ 1.445. This matches the state-of-the-art approximation factor for additive valuations. The computed allocation also satisfies the popular fairness guarantee of envy-freeness up to one good (EF1) up to a factor of 2 + ε. For instances without thresholds, it is also approximately Pareto-optimal. For instances satisfying a large market property, we show an improved approximation factor. Lastly, we show that the upper bounds on the optimal NSW introduced in Cole and Gkatzelis (2018) and Barman et al. (2018) have the same value.

AAAI Conference 2022 Conference Paper

Maximizing Nash Social Welfare in 2-Value Instances

  • Hannaneh Akrami
  • Bhaskar Ray Chaudhury
  • Martin Hoefer
  • Kurt Mehlhorn
  • Marco Schmalhofer
  • Golnoosh Shahkarami
  • Giovanna Varricchio
  • Quentin Vermande

We consider the problem of maximizing the Nash social welfare when allocating a set G of indivisible goods to a set N of agents. We study instances in which all agents have 2-value additive valuations: the value of every agent i ∈ N for every good j ∈ G is v_ij ∈ {p, q}, for p, q ∈ N, p ≤ q. In this work, we design an algorithm to compute an optimal allocation in polynomial time if p divides q, i.e., when p = 1 and q ∈ N after appropriate scaling. The problem is NP-hard whenever p and q are coprime and p ≥ 3. In terms of approximation, we present positive and negative results for general p and q. We show that our algorithm obtains an approximation ratio of at most 1.0345. Moreover, we prove that the problem is APX-hard, with a lower bound of 1.000015 achieved at p/q = 4/5.
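As a reference point for the objective being maximized, here is a tiny brute-force sketch (exponential time, illustration only, not the paper's polynomial-time algorithm) that computes the optimal Nash social welfare, i.e., the maximum geometric mean of agents' utilities over all allocations.

```python
from itertools import product
import math

def max_nsw(values):
    """values[i][j]: agent i's value for good j (e.g. entries in {p, q}).
    Returns the maximum geometric mean of utilities over all allocations."""
    n, m = len(values), len(values[0])
    best = 0.0
    for assignment in product(range(n), repeat=m):
        util = [0] * n
        for j, i in enumerate(assignment):   # good j goes to agent i
            util[i] += values[i][j]
        best = max(best, math.prod(util) ** (1.0 / n))
    return best

# a 2-value instance with p = 1, q = 2; optimum splits the two valued goods
best = max_nsw([[2, 1, 1],
                [1, 2, 1]])
```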

MFCS Conference 2019 Conference Paper

Trustworthy Graph Algorithms (Invited Talk)

  • Mohammad Abdulaziz
  • Kurt Mehlhorn
  • Tobias Nipkow

The goal of the LEDA project was to build an easy-to-use and extendable library of correct and efficient data structures, graph algorithms and geometric algorithms. We report on the use of formal program verification to achieve an even higher level of trustworthiness. Specifically, we report on an ongoing and largely finished verification of the blossom-shrinking algorithm for maximum cardinality matching.

JELIA Conference 2016 Conference Paper

Opposition Frameworks

  • Cosmina Croitoru
  • Kurt Mehlhorn

In this paper we introduce opposition frameworks, a generalization of Dung's argumentation frameworks. While keeping the attack relation as the sole type of interaction between nodes and retaining the abstract level of argumentation frameworks, opposition frameworks add more flexibility, reducing the gap between structured and abstract argumentation. A guarded attack calculus is developed in order to obtain proper generalizations of Dung's admissibility-based semantics. The high modeling capability of our new setting offers an alternative instantiation of other existing argumentation frameworks for argument evaluation.

FOCS Conference 2015 Conference Paper

Pattern-Avoiding Access in Binary Search Trees

  • Parinya Chalermsook
  • Mayank Goswami 0001
  • László Kozma 0002
  • Kurt Mehlhorn
  • Thatchaphol Saranurak

The dynamic optimality conjecture is perhaps the most fundamental open question about binary search trees (BSTs). It postulates the existence of an asymptotically optimal online BST, i.e., one that is constant-factor competitive with any BST on any input access sequence. The two main candidates for dynamic optimality in the literature are splay trees [Sleator and Tarjan, 1985] and Greedy [Lucas, 1988; Munro, 2000; Demaine et al., 2009]. Despite BSTs being among the simplest data structures in computer science, and despite extensive effort over the past three decades, the conjecture remains elusive. Dynamic optimality is trivial for almost all sequences: the optimum access cost of most length-n sequences is Theta(n log n), achievable by any balanced BST. Thus, the obvious missing step towards the conjecture is an understanding of the "easy" access sequences, and indeed the most fruitful research direction so far has been the study of specific sequences whose "easiness" is captured by a parameter of interest. For instance, splay provably achieves the bound of O(nd) when d roughly measures the distances between consecutive accesses (dynamic finger), the average entropy (static optimality), or the delays between multiple accesses of an element (working set). The difficulty of proving dynamic optimality is witnessed by other highly restricted special cases that remain unresolved. One prominent example is the traversal conjecture [Sleator and Tarjan, 1985], which states that preorder sequences (whose optimum is linear) are accessed in linear time by splay trees; no online BST is known to satisfy this conjecture. In this paper, we prove two different relaxations of the traversal conjecture for Greedy: (i) Greedy is almost linear for preorder traversal; (ii) if linear-time preprocessing is allowed, Greedy is in fact linear. These statements are corollaries of our more general results that express the complexity of access sequences in terms of a pattern-avoidance parameter k.
Pattern avoidance is a well-established concept in combinatorics, and the classes of input sequences thus defined are rich; e.g., the k = 3 case includes preorder sequences. For any sequence X with parameter k, our most general result shows that Greedy achieves cost n·2^{(α(n))^{O(k)}}, where α is the inverse Ackermann function. Furthermore, a broad subclass of parameter-k sequences has a natural combinatorial interpretation as k-decomposable sequences. For this class of inputs, we obtain an n·2^{O(k)} bound for Greedy when preprocessing is allowed. For k = 3, these results imply (i) and (ii). To our knowledge, these are the first upper bounds for Greedy that are not known to hold for any other online BST. To obtain these results we identify an input-revealing property of Greedy: informally, the execution log partially reveals the structure of the access sequence. This property facilitates the use of rich technical tools from forbidden submatrix theory. Further studying the intrinsic complexity of k-decomposable sequences, we make several observations. First, in order to obtain an offline optimal BST, it is enough to bound Greedy on non-decomposable access sequences. Furthermore, we show that the optimal cost for k-decomposable sequences is Theta(n log k), which is well below the proven performance of all known BST algorithms. Hence, sequences in this class can be seen as a "candidate counterexample" to dynamic optimality.

JMLR Journal 2011 Journal Article

Weisfeiler-Lehman Graph Kernels

  • Nino Shervashidze
  • Pascal Schweitzer
  • Erik Jan van Leeuwen
  • Kurt Mehlhorn
  • Karsten M. Borgwardt

In this article, we propose a family of efficient kernels for large graphs with discrete node labels. Key to our method is a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs, whose node attributes capture topological and label information. A family of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly efficient kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of edges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classification benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale applications of graph kernels in various disciplines such as computational biology and social network analysis.
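The relabeling step at the heart of the Weisfeiler-Lehman kernel can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation; a production kernel would compress the tuple labels into fresh integers after each round.

```python
from collections import Counter

def wl_features(adj, labels, rounds):
    """adj: node -> list of neighbours; labels: node -> initial label.
    Each round replaces a node's label by (old label, sorted multiset of
    neighbour labels). Returns label counts accumulated over all rounds;
    the kernel value of two graphs is the dot product of two such maps."""
    features = Counter(labels.values())
    for _ in range(rounds):
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
        features.update(labels.values())
    return features

# a path a-b-c with uniform initial labels: one round already separates
# the degree-2 node b from the endpoints a and c
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
feats = wl_features(adj, {'a': 1, 'b': 1, 'c': 1}, rounds=1)
```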

MFCS Conference 2007 Invited Paper

Minimum Cycle Bases in Graphs: Algorithms and Applications

  • Kurt Mehlhorn

A cycle basis of a graph is a family of cycles which spans all cycles of the graph. In an undirected graph, a cycle is simply a set of edges with respect to which every vertex has even degree. We view cycles as vectors indexed by edges. The entry for an edge is one if the edge belongs to the cycle and is zero otherwise. Addition of cycles corresponds to vector addition modulo 2 (symmetric difference of the underlying edge sets). In this way, the cycles of a graph form a vector space and a cycle basis is simply a basis of this vector space. The notion for directed graphs is slightly more involved.
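The vector-space view described above is easy to make concrete. A short sketch (ours, for illustration) represents cycles as Python edge sets, so that GF(2) addition is exactly symmetric difference, and checks the even-degree characterization:

```python
from collections import Counter

def add_cycles(c1, c2):
    """GF(2) addition of two cycles given as edge sets: symmetric difference."""
    return c1 ^ c2

def is_cycle(edges):
    """An edge set is a cycle in the GF(2) sense iff every vertex has even degree."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return all(d % 2 == 0 for d in deg.values())

# two triangles sharing edge (1, 2); their GF(2) sum is the surrounding 4-cycle
t1 = {(0, 1), (1, 2), (2, 0)}
t2 = {(1, 2), (2, 3), (3, 1)}
s = add_cycles(t1, t2)
```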

MFCS Conference 2003 Conference Paper

Smoothed Analysis of Three Combinatorial Problems

  • Cyril Banderier
  • René Beier
  • Kurt Mehlhorn

Smoothed analysis combines elements of worst-case and average-case analysis. For an instance x, the smoothed complexity is the average complexity of an instance obtained from x by a perturbation. The smoothed complexity of a problem is the worst smoothed complexity of any instance. Spielman and Teng introduced this notion for continuous problems. We apply the concept to combinatorial problems and study the smoothed complexity of three classical discrete problems: quicksort, left-to-right maxima counting, and shortest paths.
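The definition above suggests a simple experiment. The sketch below (our construction, not the paper's perturbation model) estimates the smoothed number of left-to-right maxima of a worst-case (sorted) instance by averaging over uniform perturbations.

```python
import random

def left_to_right_maxima(xs):
    """Count elements strictly larger than everything before them."""
    count, best = 0, float('-inf')
    for x in xs:
        if x > best:
            count, best = count + 1, x
    return count

def smoothed_maxima(xs, sigma, trials=1000, seed=0):
    """Average number of left-to-right maxima after adding U[-sigma, sigma]
    noise independently to each entry of the instance xs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perturbed = [x + rng.uniform(-sigma, sigma) for x in xs]
        total += left_to_right_maxima(perturbed)
    return total / trials

worst = list(range(100))          # sorted input: all 100 entries are maxima
avg = smoothed_maxima(worst, sigma=50.0)
```

Unperturbed, the sorted instance has n maxima; under perturbation the average drops well below n, which is the phenomenon smoothed analysis quantifies.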

MFCS Conference 1998 Conference Paper

A Parallelization of Dijkstra's Shortest Path Algorithm

  • Andreas Crauser
  • Kurt Mehlhorn
  • Ulrich Meyer 0001
  • Peter Sanders 0001

The single source shortest path (SSSP) problem lacks parallel solutions which are fast and simultaneously work-efficient. We propose simple criteria which divide Dijkstra's sequential SSSP algorithm into a number of phases, such that the operations within a phase can be done in parallel. We give a PRAM algorithm based on these criteria and analyze its performance on random digraphs with random edge weights uniformly distributed in [0, 1]. We use the G(n, d/n) model: the graph consists of n nodes and each edge is chosen with probability d/n. Our PRAM algorithm needs O(n^{1/3} log n) time and O(n log n + dn) work with high probability (whp). We also give extensions to external memory computation. Simulations show the applicability of our approach even on non-random graphs.
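The phase criterion can be illustrated sequentially. The sketch below is our reading of one such criterion, not the paper's code: in each phase, every queued node v whose tentative distance is at most min over queued u of tent(u) plus u's lightest outgoing edge is provably final, so all such nodes (and their edge relaxations) could be processed in parallel.

```python
import math

def phased_dijkstra(graph, source):
    """graph: u -> list of (v, weight), weights >= 0.
    Returns final distances of reachable nodes and the number of phases."""
    tent = {u: math.inf for u in graph}
    tent[source] = 0.0
    queued = set(graph)
    dist, phases = {}, 0
    # lightest outgoing edge of each node (inf for sinks)
    minout = {u: min((w for _, w in graph[u]), default=math.inf) for u in graph}
    while queued and min(tent[v] for v in queued) < math.inf:
        phases += 1
        # any shorter path to v would pass through some queued u and hence
        # cost at least tent(u) + minout(u); nodes below that are settled
        threshold = min(tent[u] + minout[u] for u in queued)
        batch = [v for v in queued if tent[v] <= threshold and tent[v] < math.inf]
        for v in batch:
            queued.remove(v)
            dist[v] = tent[v]
        for v in batch:          # these relaxations are independent per phase
            for w, weight in graph[v]:
                if w in queued and dist[v] + weight < tent[w]:
                    tent[w] = dist[v] + weight
    return dist, phases

dist, phases = phased_dijkstra(
    {'s': [('a', 1.0), ('b', 4.0)], 'a': [('b', 2.0)], 'b': []}, 's')
```

The classical minimum-tentative-distance node always satisfies the criterion, so each phase settles at least one node and the result matches sequential Dijkstra.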

MFCS Conference 1989 Invited Paper

LEDA: A Library of Efficient Data Types and Algorithms

  • Kurt Mehlhorn
  • Stefan Näher

LEDA is a library of efficient data types and algorithms. At present, its strength is graph algorithms and the data structures related to them. The computational geometry part is evolving. The main features of the library are (1) a clear separation of specification and implementation, (2) parameterized data types, (3) a comfortable data type graph, and (4) its ease of use.

FOCS Conference 1989 Conference Paper

On the Complexity of a Game Related to the Dictionary Problem

  • Kurt Mehlhorn
  • Stefan Näher
  • Monika Henzinger

A game on trees, which is related to the dictionary problem, is considered. There are two players, A and B, who take turns. Player A models the user of the dictionary, and player B models its implementation. Player A modifies the tree by adding new leaves, and player B modifies the tree by replacing subtrees. The cost of an insertion is the depth of the new leaf, and the cost of an update is the size of the subtree replaced. The goal of player A is to maximize cost, and the goal of B is to minimize it. It is shown that there is a strategy for player A that forces a cost of Omega(n log log n) for an n-game, that is, a game consisting of n turns of both players, and a strategy for player B that keeps the cost in O(n log log n).

FOCS Conference 1988 Conference Paper

Dynamic Perfect Hashing: Upper and Lower Bounds

  • Martin Dietzfelbinger
  • Anna R. Karlin
  • Kurt Mehlhorn
  • Friedhelm Meyer auf der Heide
  • Hans Rohnert
  • Robert Endre Tarjan

A randomized algorithm is given for the dictionary problem with O(1) worst-case time for lookup and O(1) amortized expected time for insertion and deletion. An Omega(log n) lower bound is proved for the amortized worst-case time complexity of any deterministic algorithm in a class of algorithms encompassing realistic hashing-based schemes. If the worst-case lookup time is restricted to k, then the lower bound for insertion becomes Omega(k·n^{1/k}).

MFCS Conference 1986 Conference Paper

Deterministic Simulation of Idealized Parallel Computers on More Realistic Ones

  • Helmut Alt
  • Torben Hagerup
  • Kurt Mehlhorn
  • Franco P. Preparata

We describe a deterministic simulation of PRAMs on module parallel computers (MPCs) and on processor networks of bounded degree. The simulating machines have the same number n of processors as the simulated PRAM, and if the size of the PRAM's shared memory is polynomial in n, each PRAM step is simulated by O(log n) MPC steps or by O((log n)^2) steps of the bounded-degree network. This improves upon a previous result by Upfal and Wigderson. We also prove an Ω((log n)^2 / log log n) lower bound on the number of steps needed to simulate one PRAM step on a bounded-degree network under the assumption that the communication in the network is point-to-point.

FOCS Conference 1982 Conference Paper

On the Program Size of Perfect and Universal Hash Functions

  • Kurt Mehlhorn

We address the question of program size of perfect and universal hash functions. We prove matching upper and lower bounds (up to constant factors) on program size. Furthermore, we show that minimum or nearly minimum size programs can be found efficiently. In addition, these (near) minimum size programs have time complexity at most O(log* N), where N is the size of the universe, in the case of perfect hashing, and time complexity O(1) in the case of universal hashing. Thus, for universal hashing, programs of minimal size and minimal time complexity have been found.
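For context on the universal-hashing half of this result, the classic Carter-Wegman family (a standard textbook construction, not taken from the paper) shows what a small-program, constant-time universal hash function looks like: two numbers modulo a prime suffice to describe it, i.e., O(log N) bits of program for a universe of size N.

```python
import random

def make_universal_hash(p, m, rng=random):
    """Return h(x) = ((a*x + b) mod p) mod m with random a in [1, p) and
    b in [0, p). For fixed x != y < p, the collision probability over the
    choice of (a, b) is at most 1/m. The "program" is just the pair (a, b)."""
    a = rng.randrange(1, p)
    b = rng.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

h = make_universal_hash(p=2_147_483_647, m=100)  # Mersenne prime 2^31 - 1
buckets = [h(x) for x in range(10)]
```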

MFCS Conference 1979 Conference Paper

Some Remarks on Boolean Sums

  • Kurt Mehlhorn

Neciporuk, Lamagna/Savage and Tarjan determined the monotone network complexity of a set of Boolean sums if any two sums have at most one variable in common. Wegener then solved the case that any two sums have at most k variables in common. We extend his methods and results and consider the case that any set of h+1 distinct sums has at most k variables in common. We use our general results to explicitly construct a set of n Boolean sums over n variables whose monotone complexity is of order n^{5/3}. The best previously known bound was of order n^{3/2}. Related results were obtained independently by Pippenger.

MFCS Conference 1975 Conference Paper

Monotone Switching Circuits and Boolean Matrix Product

  • Kurt Mehlhorn
  • Zvi Galil

We explore the concept of local transformations of monotone switching circuits, i.e., what kind of local changes in a circuit leave the functions computed by the circuit invariant. We obtain several general theorems in this direction. We apply these results to Boolean matrix product and prove that the school method for matrix multiplication yields the unique optimal monotone circuit.
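The school method referenced above is ordinary matrix multiplication with OR in place of addition and AND in place of multiplication. A straightforward sketch (ours, not the paper's circuit formalism):

```python
def bool_matmul(A, B):
    """Boolean matrix product: C[i][j] = OR over k of (A[i][k] AND B[k][j]).
    The school method evaluates exactly these n^3 ANDs and the ORs over them."""
    n, m, p = len(A), len(B), len(B[0])
    return [[any(A[i][k] and B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 0],
     [1, 1]]
B = [[0, 1],
     [1, 0]]
C = bool_matmul(A, B)
```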