Arrow Research search

Author name cluster

Xavier Gitiaux

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author row

Possible papers (4)

AAMAS 2023 · Conference Paper

Counterfactually Fair Dynamic Assignment: A Case Study on Policing

  • Tasfia Mashiat
  • Xavier Gitiaux
  • Huzefa Rangwala
  • Sanmay Das

Resource assignment algorithms for decision-making in dynamic environments have been shown to sometimes lead to negative impacts on individuals from minority populations. We propose a framework for algorithmic assignment of scarce resources in a dynamic setting that seeks to minimize concerns around unfairness and the potential for runaway feedback loops that create injustices. Our model estimates an underlying true latent confounder in a biased dataset, and makes allocation decisions based on a notion of fair intervention. We present evidence for the plausibility of our model by analyzing a novel dataset obtained from the City of Chicago through FOIA requests, and plan to release this dataset along with a visualization tool for use by various stakeholders. We also show that, in a simulated environment, our counterfactually fair policy can allocate limited resources near optimally, and better than baseline alternatives.

IJCAI 2022 · Conference Paper

SoFaiR: Single Shot Fair Representation Learning

  • Xavier Gitiaux
  • Huzefa Rangwala

To avoid discriminatory uses of their data, organizations can learn to map the data into a representation that filters out information related to sensitive attributes. However, all existing methods in fair representation learning incur a fairness-information trade-off: to reach different points on the fairness-information plane, one must train different models. In this paper, we first demonstrate that fairness-information trade-offs are fully characterized by rate-distortion trade-offs. We then use this key result to propose SoFaiR, a single-shot fair representation learning method that generates many points on the fairness-information plane with one trained model. Besides its computational savings, our single-shot approach is, to the best of our knowledge, the first fair representation learning method that explains what information is affected by changes in the fairness/distortion properties of the representation. Empirically, we find on three datasets that SoFaiR achieves fairness-information trade-offs similar to those of its multi-shot counterparts.
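A minimal sketch of the single-shot idea described in the abstract, under loose assumptions: the trained model is stood in for by a simple bit-masking encoder, so this illustrates only the shape of the mechanism (one encoder, many operating points) and nothing about SoFaiR's actual architecture or training objective. All names here are hypothetical.

```python
# Toy sketch (not SoFaiR itself): a single "encoder" exposes a knob
# (number of masked low-order bits) that traces multiple points on an
# information/rate-style curve without retraining anything.
def encode(x, masked_bits):
    """Quantize x by dropping its masked_bits low-order bits."""
    return (x >> masked_bits) << masked_bits

values = list(range(16))
# The number of distinct codes shrinks as more bits are masked, i.e.
# one model yields a whole family of coarser (lower-information) codes:
sizes = [len({encode(v, b) for v in values}) for b in range(5)]
# sizes == [16, 8, 4, 2, 1]
```

The point of the sketch is only that a single parameterized encoder can sweep an information axis; in the paper, the analogous sweep moves along the fairness-information plane.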

AAAI 2021 · Conference Paper

Fair Representations by Compression

  • Xavier Gitiaux
  • Huzefa Rangwala

Organizations that collect and sell data face increasing scrutiny over the discriminatory use of that data. We propose a novel unsupervised approach to transform data into a compressed binary representation independent of sensitive attributes. We show that in an information bottleneck framework, a parsimonious representation should filter out information related to sensitive attributes if they are provided directly to the decoder. Empirical results show that the proposed method, FBC, achieves a state-of-the-art accuracy-fairness trade-off. Explicit control of the entropy of the representation's bit stream allows the user to move smoothly and simultaneously along both rate-distortion and rate-fairness curves.

IJCAI 2019 · Conference Paper

mdfa: Multi-Differential Fairness Auditor for Black Box Classifiers

  • Xavier Gitiaux
  • Huzefa Rangwala

Machine learning algorithms are increasingly involved in sensitive decision-making processes with adverse implications for individuals. This paper presents a new tool, mdfa, that identifies the characteristics of the victims of a classifier's discrimination. We measure discrimination as a violation of multi-differential fairness, a guarantee that a black-box classifier's outcomes do not leak information on the sensitive attributes of a small group of individuals. We reduce the problem of identifying worst-case violations to matching distributions and predicting where sensitive attributes and the classifier's outcomes coincide. We apply mdfa to a recidivism risk assessment classifier widely used in the United States and demonstrate that, for individuals with little criminal history, identified African-Americans are three times more likely to be considered at high risk of violent recidivism than similar non-African-Americans.
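As an illustrative companion to the abstract above (this is not the paper's mdfa algorithm), the toy audit below computes, within a candidate subgroup, the gap in a classifier's positive-outcome rates across sensitive-attribute values; a large gap means outcomes reveal the sensitive attribute for that subgroup, which is the flavor of leakage a differential-fairness audit targets. All function and variable names are hypothetical.

```python
# Toy fairness audit (assumption-laden sketch, not mdfa): within a
# subgroup, compare positive-outcome rates across sensitive groups.
from collections import defaultdict

def outcome_rate_gap(outcomes, sensitive, in_subgroup):
    """Max absolute difference in positive-outcome rates across
    sensitive-attribute values, restricted to subgroup members."""
    pos = defaultdict(int)   # positive outcomes per sensitive group
    tot = defaultdict(int)   # subgroup members per sensitive group
    for y, s, member in zip(outcomes, sensitive, in_subgroup):
        if member:
            tot[s] += 1
            pos[s] += y
    rates = [pos[s] / tot[s] for s in tot if tot[s] > 0]
    return max(rates) - min(rates) if rates else 0.0

# Example: group "a" receives positives 3/4 of the time, group "b"
# 1/4 of the time, so the gap is 0.5.
gap = outcome_rate_gap(
    outcomes=[1, 1, 1, 0, 0, 0, 0, 1],
    sensitive=["a", "a", "a", "a", "b", "b", "b", "b"],
    in_subgroup=[True] * 8,
)
```

The actual paper searches for the worst-case subgroup rather than taking one as given; this sketch only shows the per-subgroup disparity measurement such a search would score.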