Arrow Research search

Author name cluster

Avinatan Hassidim

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

31 papers
2 author rows

Possible papers

31

AAAI Conference 2025 Conference Paper

Reducing Leximin Fairness to Utilitarian Optimization

  • Eden Hartman
  • Yonatan Aumann
  • Avinatan Hassidim
  • Erel Segal-Halevi

Two prominent objectives in social choice are utilitarian - maximizing the sum of agents' utilities, and leximin - maximizing the smallest agent's utility, then the second-smallest, etc. Utilitarianism is typically computationally easier to attain but is generally viewed as less fair. This paper presents a general reduction scheme that, given a utilitarian solver, produces a distribution over states (deterministic outcomes) that is leximin in expectation. Importantly, the scheme is robust in the sense that, given an approximate utilitarian solver, it produces a lottery that is approximately-leximin (in expectation) - with the same approximation factor. We apply our scheme to several social choice problems: stochastic allocations of indivisible goods, giveaway lotteries, and fair lotteries for participatory budgeting.
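As a concrete illustration of the leximin order this abstract defines (a toy sketch, not the paper's reduction scheme), utility vectors can be compared by sorting them ascending and comparing lexicographically:

```python
def leximin_key(utilities):
    """Sort utilities ascending; leximin compares these tuples lexicographically."""
    return tuple(sorted(utilities))

def leximin_better(u, v):
    """True if utility vector u is strictly leximin-preferred to v."""
    return leximin_key(u) > leximin_key(v)

# The smallest utility is compared first, then the second-smallest, etc.
print(leximin_better([3, 1, 5], [2, 2, 2]))  # False: min 1 loses to min 2
print(leximin_better([2, 2, 5], [2, 2, 4]))  # True: mins tie, third entry wins
```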

NeurIPS Conference 2024 Conference Paper

Multi-turn Reinforcement Learning from Preference Human Feedback

  • Lior Shani
  • Aviv Rosenberg
  • Asaf Cassel
  • Oran Lang
  • Daniele Calandriello
  • Avital Zipori
  • Hila Noga
  • Orgad Keller

Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning Large Language Models (LLMs) with human preferences, allowing LLMs to demonstrate remarkable abilities in various tasks. Existing methods work by emulating the human preference at the single decision (turn) level, limiting their capabilities in settings that require planning or multi-turn interactions to achieve a long-term goal. In this paper, we address this issue by developing novel methods for Reinforcement Learning (RL) from preference feedback between two full multi-turn conversations. In the tabular setting, we present a novel mirror-descent-based policy optimization algorithm for the general multi-turn preference-based RL problem, and prove its convergence to Nash equilibrium. To evaluate performance, we create a new environment, Education Dialogue, where a teacher agent guides a student in learning a random topic, and show that a deep RL variant of our algorithm outperforms RLHF baselines. Finally, we show that in an environment with explicit rewards, our algorithm recovers the same performance as a reward-based RL baseline, despite relying solely on a weaker preference signal.

ECAI Conference 2023 Conference Paper

Leximin Approximation: From Single-Objective to Multi-Objective

  • Eden Hartman
  • Avinatan Hassidim
  • Yonatan Aumann
  • Erel Segal-Halevi

Leximin is a common approach to multi-objective optimization, frequently employed in fair division applications. In leximin optimization, one first aims to maximize the smallest objective value; subject to this, one maximizes the second-smallest objective; and so on. Often, even the single-objective problem of maximizing the smallest value cannot be solved accurately. What can we hope to accomplish for leximin optimization in this situation? Recently, Henzinger et al. (2022) defined a notion of approximate leximin optimality. Their definition, however, considers only an additive approximation. In this work, we first define the notion of approximate leximin optimality, allowing both multiplicative and additive errors. We then show how to compute, in polynomial time, such an approximate leximin solution, using an oracle that finds an approximation to a single-objective problem. The approximation factors of the algorithms are closely related: an (α, ϵ)-approximation for the single-objective problem (where α ∈ (0, 1] and ϵ ≥ 0 are the multiplicative and additive factors respectively) translates into an (α²/(1 − α + α²), ϵ/(1 − α + α²))-approximation for the multi-objective leximin problem, regardless of the number of objectives. Finally, we apply our algorithm to obtain an approximate leximin solution for the problem of stochastic allocations of indivisible goods.
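The stated factor translation can be sanity-checked with a few lines (a sketch of the arithmetic only; `leximin_factors` is a hypothetical helper name):

```python
def leximin_factors(alpha, eps):
    """Translate an (alpha, eps) single-objective approximation into the
    leximin approximation factors stated above:
    (alpha^2/(1-alpha+alpha^2), eps/(1-alpha+alpha^2))."""
    denom = 1 - alpha + alpha ** 2
    return alpha ** 2 / denom, eps / denom

print(leximin_factors(1.0, 0.0))  # an exact solver stays exact: (1.0, 0.0)
print(leximin_factors(0.5, 0.3))  # roughly (0.333, 0.4)
```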

NeurIPS Conference 2021 Conference Paper

Adversarial Robustness of Streaming Algorithms through Importance Sampling

  • Vladimir Braverman
  • Avinatan Hassidim
  • Yossi Matias
  • Mariano Schain
  • Sandeep Silwal
  • Samson Zhou

Robustness against adversarial attacks has recently been at the forefront of algorithmic design for machine learning tasks. In the adversarial streaming model, an adversary gives an algorithm a sequence of adaptively chosen updates $u_1, \ldots, u_n$ as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. In this paper, we introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks, such as regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model. Our results are based on a simple, but powerful, observation that many importance sampling-based algorithms give rise to adversarial robustness which is in contrast to sketching based algorithms, which are very prevalent in the streaming literature but suffer from adversarial attacks. In addition, we show that the well-known merge and reduce paradigm in streaming is adversarially robust. Since the merge and reduce paradigm allows coreset constructions in the streaming setting, we thus obtain robust algorithms for $k$-means, $k$-median, $k$-center, Bregman clustering, projective clustering, principal component analysis (PCA) and non-negative matrix factorization. 
To the best of our knowledge, these are the first adversarially robust results for these problems, yet they require no new algorithmic implementations. Finally, we empirically confirm the robustness of our algorithms on various adversarial attacks and demonstrate that, by contrast, some common existing algorithms are not robust.
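A minimal sketch of the merge-and-reduce paradigm mentioned above, with a deliberately trivial "reduce" step (keep the k largest values, which exactly preserves the stream maximum). The paper's constructions substitute problem-specific coresets, and the full paradigm merges blocks in a binary tree rather than this flat loop:

```python
import heapq

def merge_and_reduce(stream, k):
    """Buffer the stream in blocks of size k, 'merge' each new block with the
    running summary, and 'reduce' back to size k. Here reduce keeps the k
    largest values, a trivial coreset that exactly preserves the maximum."""
    summary, block = [], []
    for x in stream:
        block.append(x)
        if len(block) == k:
            summary = heapq.nlargest(k, summary + block)  # merge, then reduce
            block = []
    return heapq.nlargest(k, summary + block)

s = merge_and_reduce([5, 1, 9, 3, 7, 7, 2], k=3)
print(s)  # [9, 7, 7] -- the true maximum survives every reduce step
```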

AAAI Conference 2021 Conference Paper

Targeted Negative Campaigning: Complexity and Approximations

  • Avishai Zagoury
  • Orgad Keller
  • Avinatan Hassidim
  • Noam Hazon

Given the ubiquity of negative campaigning in recent political elections, we find it important to study its properties from a computational perspective. To this end, we present a model where elections can be manipulated by convincing voters to demote specific non-favored candidates, and study its properties in the classic setting of scoring rules. When the goal is constructive (making a preferred candidate win), we prove that finding such a demotion strategy is easy for Plurality and Veto, while generally hard for t-approval and Borda. We also provide a t-factor approximation for t-approval for every fixed t, and a 3-factor approximation algorithm for Borda. Interestingly enough—following recent trends in political science that show that the effectiveness of negative campaigning depends on the type of candidate and demographic—when assigning varying prices to different possible demotion operations, we are able to provide inapproximability results. When the goal is destructive (making the leading opponent lose), we show that the problem is easy for a broad class of scoring rules.

NeurIPS Conference 2020 Conference Paper

An Optimal Elimination Algorithm for Learning a Best Arm

  • Avinatan Hassidim
  • Ron Kupfer
  • Yaron Singer

We consider the classic problem of $(\epsilon, \delta)$-PAC learning a best arm, where the goal is to identify with confidence $1-\delta$ an arm whose mean is an $\epsilon$-approximation to that of the highest-mean arm in a multi-armed bandit setting. This problem is one of the most fundamental problems in statistics and learning theory, yet somewhat surprisingly its worst-case sample complexity is not well understood. In this paper we propose a new approach for $(\epsilon, \delta)$-PAC learning a best arm. This approach leads to an algorithm whose sample complexity converges to exactly the optimal sample complexity of $(\epsilon, \delta)$-learning the mean of $n$ arms separately, and we complement this result with a conditional matching lower bound. More specifically:

  • The algorithm's sample complexity converges to exactly $\frac{n}{2\epsilon^2}\log \frac{1}{\delta}$ as $n$ grows and $\delta \geq \frac{1}{n}$;
  • We prove that no elimination algorithm obtains sample complexity arbitrarily lower than $\frac{n}{2\epsilon^2}\log \frac{1}{\delta}$. Elimination algorithms are a broad class of $(\epsilon, \delta)$-PAC best-arm learning algorithms that includes many algorithms in the literature.

When $n$ is independent of $\delta$, our approach yields an algorithm whose sample complexity converges to $\frac{2n}{\epsilon^2} \log \frac{1}{\delta}$ as $n$ grows. In comparison with the best known algorithm for this problem, our approach improves the sample complexity by a factor of over 1500, and over 6000 when $\delta \geq \frac{1}{n}$.
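For orientation, a textbook successive-elimination routine for $(\epsilon, \delta)$-PAC best-arm identification (an assumed baseline sketch, not the paper's optimal elimination algorithm):

```python
import math
import random

def successive_elimination(arms, eps, delta):
    """Textbook successive elimination (illustration only): sample every
    surviving arm once per round, then drop arms whose empirical mean sits
    a confidence interval below the best survivor."""
    survivors = dict(arms)          # name -> zero-argument sampler
    means = {a: 0.0 for a in arms}
    t = 0
    while len(survivors) > 1:
        t += 1
        radius = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        if radius <= eps / 2:       # all survivors are eps-close; stop
            break
        for a, sample in survivors.items():
            means[a] += (sample() - means[a]) / t   # running mean over t pulls
        best = max(means[a] for a in survivors)
        survivors = {a: s for a, s in survivors.items()
                     if means[a] >= best - 2 * radius}
    return max(survivors, key=lambda a: means[a])

random.seed(0)
arms = {"good": lambda: random.gauss(1.0, 0.1),
        "bad": lambda: random.gauss(0.0, 0.1)}
print(successive_elimination(arms, eps=0.1, delta=0.05))  # "good"
```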

JAIR Journal 2020 Journal Article

Fair Allocation with Diminishing Differences

  • Erel Segal-Halevi
  • Avinatan Hassidim
  • Haris Aziz

Ranking alternatives is a natural way for humans to explain their preferences. It is used in many settings, such as school choice, course allocations and residency matches. Without having any information on the underlying cardinal utilities, arguing about the fairness of allocations requires extending the ordinal item ranking to ordinal bundle ranking. The most commonly used such extension is stochastic dominance (SD), where a bundle X is preferred over a bundle Y if its score is better according to all additive score functions. SD is a very conservative extension, by which few allocations are necessarily fair while many allocations are possibly fair. We propose to make a natural assumption on the underlying cardinal utilities of the players, namely that the difference between two items at the top is larger than the difference between two items at the bottom. This assumption implies a preference extension which we call diminishing differences (DD), where X is preferred over Y if its score is better according to all additive score functions satisfying the DD assumption. We give a full characterization of allocations that are necessarily-proportional or possibly-proportional according to this assumption. Based on this characterization, we present a polynomial-time algorithm for finding a necessarily-DD-proportional allocation whenever it exists. Using simulations, we compare the various fairness criteria in terms of their probability of existence, and their probability of being fair by the underlying cardinal valuations. We find that necessary-DD-proportionality fares well in both measures. We also consider envy-freeness and Pareto optimality under diminishing-differences, as well as chore allocation under the analogous condition --- increasing-differences.
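The SD extension described above has a simple operational check for same-size bundles (a sketch under the assumption of nonnegative, weakly decreasing additive scores): X is SD-preferred to Y iff X holds at least as many items as Y from every prefix of the ranking.

```python
def sd_prefers(ranking, X, Y):
    """X is SD-preferred to Y iff every prefix of the ranking (best item
    first) contains at least as many items of X as of Y."""
    x_count = y_count = 0
    for item in ranking:
        x_count += item in X
        y_count += item in Y
        if x_count < y_count:
            return False
    return True

ranking = ["a", "b", "c", "d"]                     # "a" is the best item
print(sd_prefers(ranking, {"a", "c"}, {"b", "d"}))  # True: X leads every prefix
print(sd_prefers(ranking, {"a", "d"}, {"b", "c"}))  # False: Y leads on {a, b, c}
```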

JAIR Journal 2019 Journal Article

Approximating Weighted and Priced Bribery in Scoring Rules

  • Orgad Keller
  • Avinatan Hassidim
  • Noam Hazon

The classic Bribery problem is to find a minimal subset of voters who need to change their vote to make some preferred candidate win. Its important generalizations consider voters who are weighted and also have different prices. We provide an approximate solution for these problems for a broad family of scoring rules (which includes Borda, t-approval, and Dowdall). Our algorithm is based on a randomized reduction from these Bribery generalizations to weighted coalitional manipulation (WCM). To solve this WCM instance, we apply the Birkhoff-von Neumann (BvN) decomposition to a fractional manipulation matrix. This allows us to limit the size of the possible ballot search space, reducing it from exponential to polynomial, while still obtaining good approximation guarantees. Finding a solution in the truncated search space yields a new algorithm for WCM, which is of independent interest.
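A brute-force sketch of the Birkhoff-von Neumann decomposition step the algorithm relies on (illustration only, viable for tiny matrices; a practical implementation would find each permutation via bipartite matching rather than enumeration):

```python
from itertools import permutations

def birkhoff_decompose(M, tol=1e-9):
    """Write a doubly stochastic matrix as a convex combination of permutation
    matrices: repeatedly find a permutation supported on the positive entries,
    subtract it with the largest feasible weight, and record the pair."""
    n = len(M)
    M = [row[:] for row in M]
    parts = []
    while True:
        perm = next((p for p in permutations(range(n))
                     if all(M[i][p[i]] > tol for i in range(n))), None)
        if perm is None:
            break
        lam = min(M[i][perm[i]] for i in range(n))  # largest feasible weight
        for i in range(n):
            M[i][perm[i]] -= lam
        parts.append((lam, perm))
    return parts

print(birkhoff_decompose([[0.5, 0.5], [0.5, 0.5]]))
# [(0.5, (0, 1)), (0.5, (1, 0))]
```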

NeurIPS Conference 2019 Conference Paper

Learning to Screen

  • Alon Cohen
  • Avinatan Hassidim
  • Haim Kaplan
  • Yishay Mansour
  • Shay Moran

Imagine a large firm with multiple departments that plans a large recruitment. Candidates arrive one-by-one, and for each candidate the firm decides, based on her data (CV, skills, experience, etc), whether to summon her for an interview. The firm wants to recruit the best candidates while minimizing the number of interviews. We model such scenarios as an assignment problem between items (candidates) and categories (departments): the items arrive one-by-one in an online manner, and upon processing each item the algorithm decides, based on its value and the categories it can be matched with, whether to retain or discard it (this decision is irrevocable). The goal is to retain as few items as possible while guaranteeing that the set of retained items contains an optimal matching. We consider two variants of this problem: (i) In the first variant it is assumed that the $n$ items are drawn independently from an unknown distribution $D$. (ii) In the second variant it is assumed that before the process starts, the algorithm has an access to a training set of $n$ items drawn independently from the same unknown distribution (e.g., data of candidates from previous recruitment seasons). We give tight bounds on the minimum possible number of retained items in each of these variants. These results demonstrate that one can retain exponentially fewer items in the second variant (with the training set). Our algorithms and analysis utilize ideas and techniques from statistical learning theory and from discrete algorithms.

JAIR Journal 2019 Journal Article

New Approximations for Coalitional Manipulation in Scoring Rules

  • Orgad Keller
  • Avinatan Hassidim
  • Noam Hazon

We study the problem of coalitional manipulation---where k manipulators try to manipulate an election on m candidates---for any scoring rule, with focus on the Borda protocol. We do so in both the weighted and unweighted settings. For these problems, recent approximation approaches have tried to minimize k, the number of manipulators needed to make some preferred candidate p win (thus assuming that the number of manipulators is not limited in advance). In contrast, we focus on minimizing the score margin of p, which is the difference between the maximum score of a candidate and the score of p. We provide algorithms that approximate the optimum score margin, which are applicable to any scoring rule. For the specific case of the Borda protocol in the unweighted setting, our algorithm provides a superior approximation factor for lower values of k. Our methods are novel and adapt techniques from multiprocessor scheduling by carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle. We believe that such methods could be beneficial in other social choice settings as well.

ICML Conference 2019 Conference Paper

Self-similar Epochs: Value in arrangement

  • Eliav Buchnik
  • Edith Cohen
  • Avinatan Hassidim
  • Yossi Matias

Optimization of machine learning models is commonly performed through stochastic gradient updates on randomly ordered training examples. This practice means that each fraction of an epoch comprises an independent random sample of the training data that may not preserve informative structure present in the full data. We hypothesize that the training can be more effective with self-similar arrangements that potentially allow each epoch to provide benefits of multiple ones. We study this for “matrix factorization” – the common task of learning metric embeddings of entities such as queries, videos, or words from example pairwise associations. We construct arrangements that preserve the weighted Jaccard similarities of rows and columns and experimentally observe training acceleration of 3%-37% on synthetic and recommendation datasets. Principled arrangements of training examples emerge as a novel and potentially powerful enhancement to SGD that merits further exploration.
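The weighted Jaccard similarity the arrangements above are built to preserve is simply the ratio of coordinate-wise minima to coordinate-wise maxima:

```python
def weighted_jaccard(u, v):
    """Weighted Jaccard similarity of two nonnegative vectors:
    sum of coordinate-wise minima over sum of coordinate-wise maxima."""
    num = sum(min(a, b) for a, b in zip(u, v))
    den = sum(max(a, b) for a, b in zip(u, v))
    return num / den if den else 1.0  # two all-zero vectors are identical

print(weighted_jaccard([1, 2, 0], [2, 2, 1]))  # 3/5 = 0.6
```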

AAAI Conference 2018 Conference Paper

Approximating Bribery in Scoring Rules

  • Orgad Keller
  • Avinatan Hassidim
  • Noam Hazon

The classic bribery problem is to find a minimal subset of voters who need to change their vote to make some preferred candidate win. We find an approximate solution for this problem for a broad family of scoring rules (which includes Borda and t-approval), in the following sense: if there is a strategy which requires bribing k voters, we efficiently find a strategy which requires bribing at most k + O(√k) voters. Our algorithm is based on a randomized reduction from bribery to coalitional manipulation (UCM). To solve the UCM problem, we apply the Birkhoff-von Neumann (BvN) decomposition to a fractional manipulation matrix. This allows us to limit the size of the possible ballot search space, reducing it from exponential to polynomial, while still obtaining good approximation guarantees. Finding the optimal solution in the truncated search space yields a new algorithm for UCM, which is of independent interest.

IJCAI Conference 2018 Conference Paper

Double Auctions in Markets for Multiple Kinds of Goods

  • Erel Segal-Halevi
  • Avinatan Hassidim
  • Yonatan Aumann

Motivated by applications such as stock exchanges and spectrum auctions, there is a growing interest in mechanisms for arranging trade in two-sided markets. However, existing mechanisms are either not truthful, do not guarantee an asymptotically-optimal gain-from-trade, rely on a prior on the traders' valuations, or operate in limited settings such as a single type of good. We extend the random-sampling technique used in earlier works to multi-good markets where traders have gross-substitute valuations. We show a prior-free, truthful and strongly-budget-balanced mechanism which guarantees near-optimal gain from trade when the market sizes of all goods grow to infinity at a similar rate.

AAAI Conference 2018 Conference Paper

MUDA: A Truthful Multi-Unit Double-Auction Mechanism

  • Erel Segal-Halevi
  • Avinatan Hassidim
  • Yonatan Aumann

In a seminal paper, McAfee (1992) presented a truthful mechanism for double auctions, attaining asymptotically-optimal gain-from-trade without any prior information on the valuations of the traders. McAfee's mechanism handles single-parametric agents, allowing each seller to sell a single unit and each buyer to buy a single unit. This paper presents a double-auction mechanism that handles multi-parametric agents and allows multiple units per trader, as long as the valuation functions of all traders have decreasing marginal returns. The mechanism is prior-free, ex-post individually-rational, dominant-strategy truthful and strongly-budget-balanced. Its gain-from-trade approaches the optimum when the market size is sufficiently large.
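For context, a sketch of the single-unit trade-reduction idea behind McAfee-style double auctions, which MUDA generalizes to multi-unit traders (simplified here so that the marginal trade is always dropped; this is an illustration, not the MUDA mechanism itself):

```python
def trade_reduction(buyer_bids, seller_asks):
    """Single-unit trade-reduction double auction sketch. Buyers are sorted
    by descending bid, sellers by ascending ask; k is the number of efficient
    trades. The k-th pair is excluded and its values set the prices for the
    k-1 executed trades, which makes truthful reporting a dominant strategy
    and leaves the auctioneer a budget surplus."""
    buyers = sorted(buyer_bids, reverse=True)
    sellers = sorted(seller_asks)
    k = 0
    while k < min(len(buyers), len(sellers)) and buyers[k] >= sellers[k]:
        k += 1
    if k < 1:
        return []  # no efficient trade exists
    buyer_price, seller_price = buyers[k - 1], sellers[k - 1]
    return [(buyer_price, seller_price)] * (k - 1)

print(trade_reduction([10, 8, 3], [2, 5, 9]))
# [(8, 5)]: two efficient trades; one executes, priced by the dropped pair
```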

ICML Conference 2018 Conference Paper

Online Linear Quadratic Control

  • Alon Cohen
  • Avinatan Hassidim
  • Tomer Koren
  • Nevena Lazic
  • Yishay Mansour
  • Kunal Talwar

We study the problem of controlling linear time-invariant systems with known noisy dynamics and adversarially chosen quadratic losses. We present the first efficient online learning algorithms in this setting that guarantee $O(\sqrt{T})$ regret under mild assumptions, where $T$ is the time horizon. Our algorithms rely on a novel SDP relaxation for the steady-state distribution of the system. Crucially, and in contrast to previously proposed relaxations, the feasible solutions of our SDP all correspond to “strongly stable” policies that mix exponentially fast to a steady state.

NeurIPS Conference 2018 Conference Paper

Optimization for Approximate Submodularity

  • Yaron Singer
  • Avinatan Hassidim

We consider the problem of maximizing a submodular function when given access to its approximate version. Submodular functions are heavily studied in a wide variety of disciplines, since they are used to model many real world phenomena, and are amenable to optimization. However, there are many cases in which the phenomena we observe are only approximately submodular and the approximation guarantees cease to hold. We describe a technique which we call the sampled mean approximation that yields strong guarantees for maximization of submodular functions from approximate surrogates under cardinality and intersection of matroid constraints. In particular, we show tight guarantees for maximization under a cardinality constraint and a 1/(1+P) approximation under intersection of P matroids.

IJCAI Conference 2018 Conference Paper

Planning and Learning with Stochastic Action Sets

  • Craig Boutilier
  • Alon Cohen
  • Avinatan Hassidim
  • Yishay Mansour
  • Ofer Meshi
  • Martin Mladenov
  • Dale Schuurmans

In many practical uses of reinforcement learning (RL) the set of actions available at a given state is a random variable, with realizations governed by an exogenous stochastic process. Somewhat surprisingly, the foundations for such sequential decision processes have been unaddressed. In this work, we formalize and investigate MDPs with stochastic action sets (SAS-MDPs) to provide these foundations. We show that optimal policies and value functions in this model have a structure that admits a compact representation. From an RL perspective, we show that Q-learning with sampled action sets is sound. In model-based settings, we consider two important special cases: when individual actions are available with independent probabilities, and a sampling-based model for unknown distributions. We develop polynomial-time value and policy iteration methods for both cases, and provide a polynomial-time linear programming solution for the first case.

IJCAI Conference 2017 Conference Paper

Fair Allocation based on Diminishing Differences

  • Erel Segal-Halevi
  • Haris Aziz
  • Avinatan Hassidim

Ranking alternatives is a natural way for humans to explain their preferences. It is used in many settings, such as school choice (NY, Boston), course allocations, and the Israeli medical lottery. In some cases (such as the latter two), several "items" are given to each participant. Without having any information on the underlying cardinal utilities, arguing about fairness of allocation requires extending the ordinal item ranking to ordinal bundle ranking. The most commonly used such extension is stochastic dominance (SD), where a bundle X is preferred over a bundle Y if its score is better according to all additive score functions. SD is a very conservative extension, by which few allocations are necessarily fair while many allocations are possibly fair. We propose to make a natural assumption on the underlying cardinal utilities of the players, namely that the difference between two items at the top is larger than the difference between two items at the bottom. This assumption implies a preference extension which we call diminishing differences (DD), where X is preferred over Y if its score is better according to all additive score functions satisfying the DD assumption. We give a full characterization of allocations that are necessarily-proportional or possibly-proportional according to this assumption. Based on this characterization, we present a polynomial-time algorithm for finding a necessarily-DD-proportional allocation if it exists. Using simulations, we show that with high probability, a necessarily-proportional allocation does not exist but a necessarily-DD-proportional allocation exists, and moreover, that allocation is proportional according to the underlying cardinal utilities.

AAMAS Conference 2017 Conference Paper

New Approximation for Borda Coalitional Manipulation

  • Orgad Keller
  • Avinatan Hassidim
  • Noam Hazon

We study the problem of Borda Unweighted Coalitional Manipulation, where k manipulators try to manipulate an election on m candidates under the Borda protocol. This problem is known to be NP-hard. While most recent approaches to approximation tried to minimize k, the number of manipulators needed to make the preferred candidate win (thus assuming that the number of manipulators is not limited in advance), we focus instead on minimizing the maximum score obtainable by a non-preferred candidate. We provide a randomized, additive O(k√(m log m)) approximation to this value; in other words, if there exists a strategy enabling the preferred candidate to win by an Ω(k√(m log m)) margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, where the addition of an extra manipulator implied (with respect to the original k) a strategy that provides an Ω(m)-additive approximation to a runner-up's score: when k is o(√(m/log m)), our strategy provides a stronger approximation. Our algorithm can also be viewed as a (1 + o(1))-multiplicative approximation since the value we approximate has a natural Ω(km) lower bound. Our methods are novel and adapt techniques from multiprocessor scheduling by carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle. We believe that such methods could be beneficial in approximating coalitional manipulation in other election protocols as well.

ICML Conference 2017 Conference Paper

Robust Guarantees of Stochastic Greedy Algorithms

  • Avinatan Hassidim
  • Yaron Singer

In this paper we analyze the robustness of stochastic variants of the greedy algorithm for submodular maximization. Our main result shows that for maximizing a monotone submodular function under a cardinality constraint, iteratively selecting an element whose marginal contribution is approximately maximal in expectation is a sufficient condition to obtain the optimal approximation guarantee with exponentially high probability, assuming the cardinality is sufficiently large. One consequence of our result is that the linear-time STOCHASTIC-GREEDY algorithm recently proposed in (Mirzasoleiman et al., 2015) achieves the optimal running time while maintaining an optimal approximation guarantee. We also show that high probability guarantees cannot be obtained for stochastic greedy algorithms under matroid constraints, and prove an approximation guarantee which holds in expectation. In contrast to the guarantees of the greedy algorithm, we show that the approximation ratio of stochastic local search is arbitrarily bad, with high probability, as well as in expectation.
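A sketch of the STOCHASTIC-GREEDY idea discussed above, applied to a toy monotone submodular coverage objective (the function and instance are illustrative assumptions, not from the paper):

```python
import math
import random

def stochastic_greedy(universe_sets, k, eps=0.1, seed=0):
    """STOCHASTIC-GREEDY sketch (after Mirzasoleiman et al., 2015): each step
    evaluates only a random sample of about (n/k)*log(1/eps) candidates and
    adds the one with the largest marginal coverage gain."""
    rng = random.Random(seed)
    n = len(universe_sets)
    sample_size = min(n, max(1, int(n / k * math.log(1 / eps))))
    chosen, covered = [], set()
    for _ in range(k):
        remaining = [i for i in range(n) if i not in chosen]
        candidates = rng.sample(remaining, min(sample_size, len(remaining)))
        best = max(candidates, key=lambda i: len(universe_sets[i] - covered))
        chosen.append(best)
        covered |= universe_sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6}]
chosen, covered = stochastic_greedy(sets, k=2)
print(sorted(covered))  # [1, 2, 3, 4, 5, 6]
```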

AAAI Conference 2015 Conference Paper

Envy-Free Cake-Cutting in Two Dimensions

  • Erel Segal-Halevi
  • Avinatan Hassidim
  • Yonatan Aumann

We consider the problem of fair division of a two dimensional heterogeneous good among several agents. Applications include division of land as well as ad space in print and electronic media. Classical cake cutting protocols either consider a one-dimensional resource, or allocate each agent several disconnected pieces. In practice, however, the two dimensional shape of the allotted piece is of crucial importance in many applications, e.g., squares or bounded aspect-ratio rectangles are most useful for building houses as well as advertisements. We thus introduce and study the problem of envy-free two-dimensional division wherein the utility of the agents depends on the geometric shape of the allocated pieces (as well as the location and size). In addition to envy-freeness, we require that the fraction allocated to each agent be at least a certain constant that depends only on the shape of the cake and the number of agents. We focus on the case where the allotted pieces must be square and the cakes are either squares or the unbounded plane. We provide algorithms for the problem for settings with two and three agents.

IJCAI Conference 2015 Conference Paper

Implementing the Wisdom of Waze

  • Shoshana Vasserman
  • Michal Feldman
  • Avinatan Hassidim

We study a setting of non-atomic routing in a network of m parallel links with asymmetry of information. While a central entity (such as a GPS navigation system) — a mediator hereafter — knows the cost functions associated with the links, they are unknown to the individual agents controlling the flow. The mediator gives incentive compatible recommendations to agents, trying to minimize the total travel time. Can the mediator do better than when agents minimize their travel time selfishly without coercing agents to follow his recommendations? We study the mediation ratio: the ratio between the mediated equilibrium obtained from an incentive compatible mediation protocol and the social optimum. We find that mediation protocols can reduce the efficiency loss compared to the full revelation alternative, and compared to the non-mediated Nash equilibrium. In particular, in the case of two links with affine cost functions, the mediation ratio is at most 8/7, and remains strictly smaller than the price of anarchy of 4/3 for any fixed m. Yet, it approaches the price of anarchy as m grows. For general (monotone) cost functions, the mediation ratio is at most m, a significant improvement over the unbounded price of anarchy.
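The gap between selfish and optimal routing that mediation narrows can be seen in the classic two-link Pigou example (an illustrative instance, not one of the paper's mediation protocols): link 1 costs x (its own load), link 2 costs 1 regardless of load, and total demand is 1.

```python
def total_cost(x):
    """Total travel time when a fraction x uses link 1 and 1-x uses link 2."""
    return x * x + (1 - x) * 1.0

# Selfish agents all take link 1 (its cost never exceeds 1), so x = 1.
nash_cost = total_cost(1.0)
# The social optimum splits traffic; a fine grid search over x in [0, 1].
opt_cost = min(total_cost(i / 1000) for i in range(1001))  # 0.75, at x = 0.5
print(nash_cost / opt_cost)  # 4/3, the price of anarchy for affine costs
```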

JAAMAS Journal 2015 Journal Article

Negotiation in exploration-based environment

  • Israel Sofer
  • David Sarne
  • Avinatan Hassidim

When two parties need to split some reward between them, negotiation theory can predict what offers the parties will make and how the reward will be split. When a single party needs to evaluate several alternatives and choose the best among them, optimal-stopping-rule theories guide it as to how to perform the exploration, what to explore next and when to stop. We consider a model in which party A needs to choose one alternative, but has no information and no means of acquiring information on the value of each alternative. Party B, on the other hand, has no interest in what party A chooses, but can perform (costly) exploration to learn about the different alternatives. As both negotiation and exploration take time, the common deadline and discounting factor further tie the processes together. We study the combined model, providing a comprehensive game theoretic based analysis, enabling the extraction of the payments that need to be made between agents A and B, and the social welfare. Special emphasis is placed on studying the effect of interleaving negotiation and exploration, and when is this method preferred over separating the two. In addition to exploring the basic questions, we also consider the case in which one of the parties has some control over the parameters of the problem (e.g., the negotiation protocol), and show how it increases the utility of this party but decreases the overall welfare.

AAAI Conference 2015 Conference Paper

Strategy-Proof and Efficient Kidney Exchange Using a Credit Mechanism

  • Chen Hajaj
  • John Dickerson
  • Avinatan Hassidim
  • Tuomas Sandholm
  • David Sarne

We present a credit-based matching mechanism for dynamic barter markets—and kidney exchange in particular—that is both strategy proof and efficient, that is, it guarantees truthful disclosure of donor-patient pairs from the transplant centers and results in the maximum global matching. Furthermore, the mechanism is individually rational in the sense that, in the long run, it guarantees each transplant center more matches than the center could have achieved alone. The mechanism does not require assumptions about the underlying distribution of compatibility graphs—a nuance that has previously produced conflicting results in other aspects of theoretical kidney exchange. Our results apply not only to matching via 2-cycles: the matchings can also include cycles of any length and altruist-initiated chains, which is important at least in kidney exchanges. The mechanism can also be adjusted to guarantee immediate individual rationality at the expense of economic efficiency, while preserving strategy proofness via the credits. This circumvents a well-known impossibility result in static kidney exchange concerning the existence of an individually rational, strategy-proof, and maximal mechanism. We show empirically that the mechanism results in significant gains on data from a national kidney exchange that includes 59% of all US transplant centers.

AAAI Conference 2012 Conference Paper

Negotiation in Exploration-Based Environment

  • Israel Sofer
  • David Sarne
  • Avinatan Hassidim

This paper studies repetitive negotiation over the execution of an exploration process between two self-interested, fully rational agents in a full-information environment with side payments. A key aspect of the protocol is that the exploration's execution may interleave with the negotiation itself, inflicting some degradation on the exploration's flexibility. The advantage of this form of negotiation is that it enables the agents to verify that the exploration is executed in the form agreed upon in the negotiation. We show that in many cases, much of the computational complexity of the new protocol can be eliminated by solving an alternative negotiation scheme in which the parties first negotiate the exploration terms as a whole and then execute them. As demonstrated in the paper, the solution characteristics of the new protocol differ somewhat from those of legacy negotiation protocols, where the execution of the agreement reached through the negotiation is completely separated from the negotiation process. Furthermore, if the agents are given the option to control some of the negotiation-protocol parameters, the resulting exploration may be suboptimal. In particular, we show that the increase in an agent's expected utility in such cases is unbounded, and so is the resulting decrease in social welfare. Surprisingly, we show that further increasing one agent's level of control over some of the negotiation parameters enables bounding the resulting decrease in social welfare.

FOCS Conference 2009 Conference Paper

Local Graph Partitions for Approximation and Testing

  • Avinatan Hassidim
  • Jonathan A. Kelner
  • Huy N. Nguyen
  • Krzysztof Onak

We introduce a new tool for approximation and testing algorithms called partitioning oracles. We develop methods for constructing them for any class of bounded-degree graphs with an excluded minor, and in general, for any hyperfinite class of bounded-degree graphs. These oracles use only local computation to consistently answer queries about a global partition that breaks the graph into small connected components by removing only a small fraction of the edges. We illustrate the power of this technique by using it to extend and simplify a number of previous approximation and testing results for sparse graphs, as well as to provide new results that were unachievable with existing techniques. For instance: 1. We give constant-time approximation algorithms for the size of the minimum vertex cover, the minimum dominating set, and the maximum independent set for any class of graphs with an excluded minor. 2. We give a simple proof that any minor-closed graph property is testable in constant time in the bounded-degree model. 3. We prove that it is possible to approximate the distance to almost any hereditary property in any bounded-degree hereditary family of graphs. Hereditary properties of interest include bipartiteness, k-colorability, and perfectness.
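As a toy illustration of the oracle interface described above (invented for this sketch, not a construction from the paper; the cycle graph and the name `make_partition_oracle` are assumptions), consider the n-cycle: a fixed global partition into contiguous blocks of k vertices cuts only about a 1/k fraction of the edges, and a query about any single vertex can be answered with purely local computation while remaining consistent with that one global partition.

```python
def make_partition_oracle(n, k, shift):
    """Toy partitioning oracle for the n-cycle (vertices 0..n-1, edges
    (i, i+1 mod n)). The implicit global partition cuts the cycle into
    contiguous blocks of k vertices, offset by `shift`, so only about a
    1/k fraction of the edges cross between components."""
    def oracle(v):
        # Answer locally: compute the block containing v without ever
        # materializing the full partition.
        b = ((v - shift) % n) // k            # block index of v
        start = (shift + b * k) % n           # first vertex of that block
        size = min(k, n - b * k)              # last block may be shorter
        return [(start + j) % n for j in range(size)]
    return oracle

oracle = make_partition_oracle(10, 3, 2)      # blocks of 3 on a 10-cycle
```

In the paper's setting the offset would be randomized so that, in expectation, few edges are cut; here a fixed `shift` keeps the sketch deterministic.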

FOCS Conference 2008 Conference Paper

Broadcasting with Side Information

  • Noga Alon
  • Eyal Lubetzky
  • Uri Stav
  • Amit Weinstein
  • Avinatan Hassidim

A sender holds a word x consisting of n blocks x_1, …, x_n, each of t bits, and wishes to broadcast a codeword to m receivers R_1, …, R_m. Each receiver R_i is interested in one block and has prior side information consisting of some subset of the other blocks. Let β_t be the minimum number of bits that has to be transmitted when each block is of length t, and let β be the limit β = lim_{t→∞} β_t / t. Informally, β is the average communication cost per bit in each block (for long blocks). Finding the coding rate β for such an informed broadcast setting generalizes several coding-theoretic parameters related to Informed Source Coding on Demand, Index Coding, and Network Coding. In this work we show that using large data blocks may strictly improve upon the trivial encoding that treats each bit in the block independently. To this end, we provide general bounds on β_t, and prove that for any constant C there is an explicit broadcast setting in which β = 2 but β_1 > C. One of these examples answers a previously posed open question. In addition, we provide examples with the following counterintuitive direct-sum phenomenon: for a union of several mutually independent broadcast settings, the optimal code for the combined setting may yield a significant saving in communication over concatenating optimal encodings for the individual settings. This result also provides new non-linear coding schemes that improve upon the largest previously known gap between linear and non-linear Network Coding. The proofs are based on a relation between this problem and results on Witsenhausen's rate, OR graph products, colorings of Cayley graphs, and the chromatic numbers of Kneser graphs.
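A classic toy instance of broadcasting with side information (a textbook index-coding example, not one of the paper's constructions) shows how side information shrinks the broadcast: if each of m receivers wants one bit and already knows all the others, a single XOR of all the bits replaces m separate transmissions.

```python
from functools import reduce
from operator import xor

def broadcast_xor(bits):
    """Sender's single transmitted symbol: the XOR of all message bits."""
    return reduce(xor, bits)

def decode(code, side_info):
    """A receiver that knows every bit except its own XORs the broadcast
    with its side information; everything cancels except the wanted bit."""
    return reduce(xor, side_info, code)

bits = [1, 0, 1, 1, 0]
code = broadcast_xor(bits)                     # one bit on the air
recovered = [decode(code, bits[:i] + bits[i + 1:]) for i in range(len(bits))]
# recovered equals bits: each receiver gets its own bit from one broadcast
```

This is the extreme case where side information cuts the cost from m bits to 1; the paper studies how the achievable rate β behaves for general side-information patterns.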

FOCS Conference 2008 Conference Paper

Quantum Multi Prover Interactive Proofs with Communicating Provers

  • Michael Ben-Or
  • Avinatan Hassidim
  • Haran Pilpel

We introduce another variant of quantum MIP, in which the provers do not share entanglement, the communication between the verifier and the provers is quantum, but the provers are unrestricted in the classical communication between them. At first, this model may seem very weak, as provers who can exchange information seem equivalent in power to a single prover. This is in fact not the case: we show that any language in NEXP can be recognized in this model efficiently, with just two provers and two rounds of communication, with a constant completeness-soundness gap. Similar ideas and techniques may help with other models of quantum MIP, including the dual question of non-communicating provers with unlimited entanglement.

FOCS Conference 2008 Conference Paper

The Bayesian Learner is Optimal for Noisy Binary Search (and Pretty Good for Quantum as Well)

  • Michael Ben-Or
  • Avinatan Hassidim

We use a Bayesian approach to optimally solve problems in noisy binary search. We deal with two variants: 1. Each comparison is erroneous with independent probability 1 − p. 2. At each stage, k comparisons can be performed in parallel and a noisy answer is returned. We present a (classical) algorithm which solves both variants optimally (with respect to p and k), up to an additive term of O(log log n), and prove matching information-theoretic lower bounds. We use the algorithm to improve the results of Farhi et al., presenting an exact quantum search algorithm on an ordered list with expected complexity less than (log₂ n)/3.
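The first variant can be illustrated with a small simulation, a sketch of the standard Bayesian-median strategy rather than the paper's exact algorithm (the name `noisy_binary_search` and its parameters are illustrative): maintain a posterior over the candidate positions, always query the posterior median, and apply Bayes' rule to each noisy answer.

```python
import random

def noisy_binary_search(n, target, p, num_queries, rng):
    """Bayesian noisy binary search sketch: keep a posterior over the n
    candidate positions; each comparison answers "target >= q" correctly
    with independent probability p."""
    w = [1.0 / n] * n                      # uniform prior
    for _ in range(num_queries):
        # Query the posterior median: smallest index whose cumulative
        # probability mass reaches 1/2.
        acc, q = 0.0, n - 1
        for i, wi in enumerate(w):
            acc += wi
            if acc >= 0.5:
                q = i
                break
        truth = target >= q
        answer = truth if rng.random() < p else not truth   # noisy oracle
        # Bayes update: positions consistent with the answer get
        # likelihood p, the rest get 1 - p; then renormalize.
        w = [wi * (p if (i >= q) == answer else 1 - p) for i, wi in enumerate(w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return max(range(n), key=w.__getitem__)   # posterior mode
```

With p = 0.9 and a few dozen queries on n = 64 positions, the posterior mode recovers the hidden position with high probability, in line with the roughly (log n)/(1 − H(p)) information-theoretic budget.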

FOCS Conference 2006 Conference Paper

Secure Multiparty Quantum Computation with (Only) a Strict Honest Majority

  • Michael Ben-Or
  • Claude Crépeau
  • Daniel Gottesman
  • Avinatan Hassidim
  • Adam Smith 0006

Secret sharing and multiparty computation (also called "secure function evaluation") are fundamental primitives in modern cryptography, allowing a group of mutually distrustful players to perform correct, distributed computations under the sole assumption that some number of them will follow the protocol honestly. This paper investigates how much trust is necessary, that is, how many players must remain honest, in order for distributed quantum computations to be possible. We present a verifiable quantum secret sharing (VQSS) protocol, and a general secure multiparty quantum computation (MPQC) protocol, which can tolerate any ⌊(n − 1)/2⌋ cheaters among n players. Previous protocols for these tasks tolerated ⌊(n − 1)/4⌋ and ⌊(n − 1)/6⌋ cheaters, respectively. The threshold we achieve is tight: even in the classical case, "fair" multiparty computation is not possible if any set of n/2 players can cheat. Our protocols rely on approximate quantum error-correcting codes, which can tolerate a larger fraction of errors than traditional, exact codes. We introduce new families of authentication schemes and approximate codes tailored to the needs of our protocols, as well as new state-purification techniques along the lines of those used in fault-tolerant quantum circuits.