Arrow Research search

Author name cluster

Ruben Becker

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
2 author rows

Possible papers

9

AAAI Conference 2026 Conference Paper

Greedily Maximizing Ex-Ante Fairness

  • Ruben Becker
  • Bojana Kodric
  • Cosimo Vinci

We study a general optimization framework aimed at computing fair solutions in settings with a set of agents whose valuations are combined using an aggregation function. The strength of our framework lies (1) in its generality and (2) in the fact that we leverage the power of ex-ante fairness, a concept that has recently gained much attention in the scope of fair allocation and fairness in AI in general. More precisely, in our setting there are n set functions f₁, …, fₙ (e.g., the valuation functions of n agents) that are combined using an aggregation function g (e.g., the minimum, Nash social welfare, or a p-norm). The power of ex-ante fairness is obtained by allowing as a feasible solution not simply a finite set S, but instead a distribution from a set Π of distributions over feasible sets. The goal in our setting is then to find a probability distribution p in Π that maximizes the value resulting from aggregating (using g) the n expected values of the functions f₁, …, fₙ obtained when sampling a set S according to the distribution p. We stress that this is different from maximizing the expected value of g (ex-post fairness) and typically allows for much fairer solutions. We give three different greedy algorithms for three different settings of this framework and prove that they achieve constant approximation guarantees under certain realistic assumptions. For some of the settings, we show that these approximation guarantees are tight. Specific scenarios that can be modelled using our framework include fair information diffusion in social networks, fair submodular matching problems, and ex-ante versions of item assignment problems.
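The ex-ante/ex-post distinction the abstract draws can be seen on a tiny made-up instance (two agents, two feasible sets, the minimum as the aggregator g). This is only an illustrative sketch of the distinction, not the paper's framework; all names and values below are invented.

```python
# Hypothetical toy instance: two agents, two feasible sets.
# f[i][S] is agent i's value for the set named S.
f = [{"A": 1.0, "B": 0.0},   # agent 1 only values set A
     {"A": 0.0, "B": 1.0}]   # agent 2 only values set B
p = {"A": 0.5, "B": 0.5}     # a probability distribution over feasible sets

# Ex-ante: aggregate (here: min) the agents' *expected* values under p.
ex_ante = min(sum(p[S] * fi[S] for S in p) for fi in f)

# Ex-post: the *expected* aggregate value of a set sampled from p.
ex_post = sum(p[S] * min(fi[S] for fi in f) for S in p)

print(ex_ante, ex_post)  # 0.5 0.0
```

The gap illustrates the abstract's point: every concrete set leaves one agent with value 0, so the expected minimum (ex-post) is 0, while the minimum of the expectations (ex-ante) is 0.5.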

AAAI Conference 2023 Conference Paper

Improving Fairness in Information Exposure by Adding Links

  • Ruben Becker
  • Gianlorenzo D'Angelo
  • Sajjad Ghobadi

Fairness in influence maximization has been a very active research topic recently. Most works in this context study the question of how to find seeding strategies (deterministic or probabilistic) such that nodes or communities in the network get their fair share of coverage. Different fairness criteria have been used in this context. All these works assume that the entity spreading the information has an inherent interest in spreading it fairly, as otherwise it would have no reason to use the developed fair algorithms. In reality, however, this assumption may be flawed: the spreading entity may be purely efficiency-oriented. In this paper we propose to study two optimization problems whose goal is to modify the network structure by adding links in such a way that efficiency-oriented information spreading automatically becomes fair. We study the proposed optimization problems from both a theoretical and an experimental perspective, that is, we give several hardness and hardness-of-approximation results, provide efficient algorithms for some special cases, and, more importantly, provide heuristics for solving one of the problems in practice. In our experimental study we first compare the proposed heuristics against each other and establish the most successful one. In a second experiment, we show that our approach can be very successful in practice: already after adding a few edges to the networks, the greedy algorithm that purely maximizes spread surpasses all fairness-tailored algorithms in terms of ex-post fairness. Perhaps surprisingly, our approach even achieves ex-post fairness values that are comparable to or better than the ex-ante fairness values of the currently most efficient algorithms that optimize ex-ante fairness.

AAAI Conference 2023 Conference Paper

On the Cost of Demographic Parity in Influence Maximization

  • Ruben Becker
  • Gianlorenzo D'Angelo
  • Sajjad Ghobadi

Modeling and shaping how information spreads through a network is a major research topic in network analysis. While initially the focus was mostly on efficiency, fairness criteria have recently been taken into account in this setting. Most work, however, has focused on the maximin criterion, under which different groups can still receive very different shares of information. In this work we propose to consider fairness as a notion to be guaranteed by an algorithm rather than as a criterion to be maximized. To this end, we propose three optimization problems that aim at maximizing the overall spread while enforcing strict levels of demographic parity fairness via constraints (either ex-post or ex-ante). The level of fairness hence becomes a user choice rather than a property to be observed upon output. We study this setting from various perspectives. First, we prove that the cost of introducing demographic parity can be high in terms of both overall spread and computational complexity, i.e., the price of fairness may be unbounded for all three problems, and optimal solutions are hard to compute, in some cases even approximately or when fairness constraints may be violated. For one of our problems, we still design an algorithm with both a constant approximation factor and a constant fairness violation. We also give two heuristics that allow the user to choose the tolerated fairness violation. By means of an extensive experimental study, we show that our algorithms perform well in practice, that is, they achieve the best demographic parity fairness values. For certain instances we additionally obtain an overall spread comparable to that of the most efficient algorithms that come without any fairness guarantee, indicating that the empirical price of fairness may actually be small when using our algorithms.
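As a reading aid, here is a minimal sketch of what an ex-post demographic parity check with a user-chosen tolerance could look like: the fractions of each group reached by a concrete outcome should differ by at most eps. The groups, reached set, and tolerance below are invented illustration data, not the paper's algorithms.

```python
# Hypothetical group memberships and the set of nodes reached by one outcome.
groups = {"g1": {1, 2, 3, 4}, "g2": {5, 6}}
reached = {2, 3, 5}
eps = 0.25  # tolerated demographic parity violation (user choice)

# Fraction of each group that received the information.
fractions = {g: len(reached & members) / len(members)
             for g, members in groups.items()}

# Demographic parity violation: largest gap between group shares.
violation = max(fractions.values()) - min(fractions.values())
print(violation <= eps)  # True: both groups get a 0.5 share here
```

In the paper's ex-ante variants the same kind of constraint would apply to expected group shares under a randomized strategy rather than to a single sampled outcome.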

JAIR Journal 2022 Journal Article

Fairness in Influence Maximization through Randomization

  • Ruben Becker
  • Gianlorenzo D’Angelo
  • Sajjad Ghobadi
  • Hugo Gilbert

The influence maximization paradigm has been used by researchers in various fields in order to study how information spreads in social networks. While previously the attention was mostly on efficiency, more recently fairness issues have been taken into account in this scope. In the present paper, we propose to use randomization as a means of achieving fairness. While this general idea is not new, it has not been applied in this area. Similar to previous works like Fish et al. (WWW ’19) and Tsang et al. (IJCAI ’19), we study the maximin criterion for (group) fairness. In contrast to their work, however, we model the problem in such a way that, when choosing the seed sets, probabilistic strategies are possible rather than only deterministic ones. We introduce two different variants of this probabilistic problem, one that entails probabilistic strategies over nodes (node-based problem) and a second one that entails probabilistic strategies over sets of nodes (set-based problem). After analyzing the relation between the two probabilistic problems, we show that, while the original deterministic maximin problem is inapproximable, both probabilistic variants permit approximation algorithms that achieve a constant multiplicative factor of 1 − 1/e minus an additive arbitrarily small error that is due to the simulation of the information spread. For the node-based problem, the approximation is achieved by observing that a polynomial-sized linear program approximates the problem well. For the set-based problem, we show that a multiplicative-weight routine can yield the approximation result. For an experimental study, we provide implementations of multiplicative-weight routines for both the set-based and the node-based problems and compare the achieved fairness values to existing methods. Perhaps unsurprisingly, we show that the ex-ante values, i.e., the minimum expected value of an individual (or group) to obtain the information, of the computed probabilistic strategies are significantly larger than the (ex-post) fairness values of previous methods. This indicates that studying fairness via randomization is a worthwhile path to follow. Interestingly, and maybe more surprisingly, we observe that even the ex-post fairness values, i.e., the fairness values of sets sampled according to the probabilistic strategies computed by our routines, dominate the fairness achieved by previous methods on many of the instances tested.
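The multiplicative-weight idea mentioned for the set-based problem can be sketched on toy data: keep weights over groups, repeatedly let a best-response step pick the candidate set with the highest weighted coverage, and down-weight the groups it covers well so that neglected groups drive later picks. Everything below (the coverage table, step size, round count) is a hypothetical illustration, not the paper's routine or guarantee.

```python
import math

# Hypothetical deterministic data: cov[S][g] is the fraction of group g
# reached when seeding candidate set S.
cov = {"A": {"g1": 1.0, "g2": 0.0},
       "B": {"g1": 0.0, "g2": 1.0}}
groups = ["g1", "g2"]

eta, T = 0.5, 200                 # step size and number of rounds (arbitrary)
w = {g: 1.0 for g in groups}      # multiplicative weights over groups
picks = []
for _ in range(T):
    # Best response: the set maximizing weighted group coverage.
    S = max(cov, key=lambda s: sum(w[g] * cov[s][g] for g in groups))
    picks.append(S)
    # Up-weight groups that the chosen set covers poorly, then normalize.
    for g in groups:
        w[g] *= math.exp(-eta * cov[S][g])
    Z = sum(w.values())
    for g in groups:
        w[g] /= Z

# The ex-ante strategy is the empirical mixture of the chosen sets.
p = {s: picks.count(s) / T for s in cov}
ex_ante_min = min(sum(p[s] * cov[s][g] for s in cov) for g in groups)
print(ex_ante_min)  # close to 0.5; any single set gives a maximin value of 0
```

On this instance every deterministic choice leaves one group with zero coverage, while the mixture alternates between the two sets, which is exactly the kind of gap between randomized ex-ante and deterministic fairness values the abstract reports.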

AAAI Conference 2021 Conference Paper

Fairness in Influence Maximization through Randomization

  • Ruben Becker
  • Gianlorenzo D'Angelo
  • Sajjad Ghobadi
  • Hugo Gilbert

The influence maximization paradigm has been used by researchers in various fields in order to study how information spreads in social networks. While previously the attention was mostly on efficiency, more recently fairness issues have been taken into account in this scope. In the present paper, we propose to use randomization as a means of achieving fairness. While this general idea is not new, it has not been applied in the area of information spread in networks. Similar to previous works like Fish et al. (WWW ’19) and Tsang et al. (IJCAI ’19), we study the maximin criterion for (group) fairness. By allowing randomized solutions, we introduce two different variants of this problem. While the original deterministic maximin problem has been shown to be inapproximable, interestingly, we show that both probabilistic variants permit approximation algorithms with a constant multiplicative factor of 1 − 1/e minus an additive arbitrarily small error due to the simulation of the information spread. For an experimental study, we provide implementations of our methods and compare the achieved fairness values to existing methods. Unsurprisingly, the ex-ante values, i.e., the minimum expected value of an individual (or group) to obtain the information, of the computed probabilistic strategies are significantly larger than the (ex-post) fairness values of previous methods. This confirms that studying fairness via randomization is a worthwhile direction. More surprisingly, we observe that even the ex-post fairness values, i.e., the fairness values of sets sampled according to the probabilistic strategies computed by our routines, dominate the fairness achieved by previous methods on most of the instances tested.

AAMAS Conference 2021 Conference Paper

Maximizing Influence-Based Group Shapley Centrality

  • Ruben Becker
  • Gianlorenzo D'Angelo
  • Hugo Gilbert

A key problem in network analysis is the influence maximization problem, which consists of finding a set S of at most k seed users in a social network, such that the spread of information from S is maximized. We investigate the problem of choosing the best set of seeds when there exists an unknown pre-existing set of seed nodes. Our work extends that of Chen and Teng (WWW’17), who introduced the so-called Shapley centrality of a node to measure the efficiency of nodes acting as seeds within a pre-existing but unknown set of seeds. We instead consider the question: which set of cardinality k should be targeted in this kind of scenario? The resulting optimization problem proves to be very challenging: assuming common computational complexity conjectures, we obtain strong hardness-of-approximation results. Nevertheless, we design a greedy algorithm which achieves an approximation factor of (1 − 1/e)/k − ϵ for any ϵ > 0, showing that not all is lost in settings where k is bounded.

AAAI Conference 2020 Conference Paper

Balancing Spreads of Influence in a Social Network

  • Ruben Becker
  • Federico Corò
  • Gianlorenzo D'Angelo
  • Hugo Gilbert

The personalization of our news consumption on social media has a tendency to reinforce our pre-existing beliefs instead of balancing our opinions. To tackle this issue, Garimella et al. (NIPS’17) modeled the spread of these viewpoints, also called campaigns, using the independent cascade model introduced by Kempe, Kleinberg and Tardos (KDD’03) and studied an optimization problem that aims to balance information exposure when two opposing campaigns propagate in a network. This paper investigates a natural generalization of this optimization problem in which μ different campaigns propagate in the network and we aim to maximize the expected number of nodes that are reached by at least ν or none of the campaigns, where μ ≥ ν ≥ 2. Following Garimella et al., besides this general setting we also investigate a simplified one, in which campaigns propagate in a correlated manner. While for the simplified setting we show that the problem can be approximated within a constant factor for any constant μ and ν, for the general setting we give reductions leading to several approximation hardness results when ν ≥ 3. For instance, assuming the gap exponential time hypothesis to hold, we obtain that the problem cannot be approximated within a factor of n^{-g(n)} for any g(n) = o(1), where n is the number of nodes in the network. We complement our hardness results with an Ω(n^{-1/2})-approximation algorithm for the general setting when ν = 3 and μ is arbitrary.
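The objective described above (nodes reached by at least ν campaigns or by none) can be illustrated on a small, fully deterministic example; the campaigns and reach sets below are made up, and real instances would take expectations over random cascades.

```python
# Hypothetical deterministic reach of mu = 3 campaigns.
reached = {"c1": {1, 2, 3}, "c2": {2, 3}, "c3": {3, 4}}
nodes = {1, 2, 3, 4, 5}
nu = 2

def balanced_nodes(nodes, reached, nu):
    """Count nodes reached by at least nu campaigns or by none of them."""
    count = 0
    for v in nodes:
        hits = sum(v in spread for spread in reached.values())
        if hits == 0 or hits >= nu:
            count += 1
    return count

print(balanced_nodes(nodes, reached, nu))  # 3: nodes 2 and 3 (>= 2 campaigns), node 5 (none)
```

Nodes 1 and 4 are each reached by exactly one campaign, i.e., they see a one-sided view, and are therefore the ones the objective penalizes.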

SAT Conference 2017 Conference Paper

From DQBF to QBF by Dependency Elimination

  • Ralf Wimmer 0001
  • Andreas Karrenbauer
  • Ruben Becker
  • Christoph Scholl 0001
  • Bernd Becker 0001

In this paper, we propose the elimination of dependencies to convert a given dependency quantified Boolean formula (DQBF) to an equisatisfiable QBF. We show how to select a set of dependencies to eliminate such that we arrive at a smallest equisatisfiable QBF in terms of existential variables that is achievable using dependency elimination. This approach is improved by taking so-called don’t-care dependencies into account, which result from the application of dependency schemes to the formula and can be added to or removed from the formula at no cost. We have implemented this new method in the state-of-the-art DQBF solver HQS. Experiments show that dependency elimination is clearly superior to the previous method using variable elimination.