
TMLR 2026

Auditing Predictive Models for Intersectional Biases

Journal Article · Artificial Intelligence · Machine Learning

Abstract

Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we propose Conditional Bias Scan (CBS), an auditing framework for detecting intersectional biases in the outputs of classification models that may lead to disparate impact. CBS aims to identify the subgroup with the most significant bias against the protected class, compared to the equivalent subgroup in the non-protected class. The framework can audit for predictive biases using common group fairness definitions that can be represented as conditional independence statements (separation and sufficiency) for both probabilistic and binarized predictions. We show through empirical evaluations that this methodology has substantially higher bias detection power compared to similar methods that audit for subgroup fairness. We then use this approach to detect statistically significant intersectional biases in the predictions of the COMPAS pre-trial risk assessment tool and a model trained on the German Credit data.
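As a rough illustration of the subgroup-auditing idea described above (not the CBS scan statistic itself), the sketch below brute-forces intersectional subgroups defined by one or two features and ranks them by the gap in mean model prediction between the protected and non-protected class within each subgroup. All column names (risk_score, is_black, age_bucket, etc.) and the simple difference-in-means score are illustrative assumptions, not details taken from the paper.

    # Toy brute-force subgroup scan; CBS uses a more powerful conditional scan statistic.
    from itertools import combinations
    import numpy as np
    import pandas as pd

    def scan_subgroups(df, pred_col, protected_col, feature_cols, min_size=30):
        """Rank intersectional subgroups by the gap in mean prediction
        between protected and non-protected members."""
        results = []
        # Enumerate subgroups defined by single features and pairs of features.
        for k in (1, 2):
            for cols in combinations(feature_cols, k):
                for values, sub in df.groupby(list(cols)):
                    prot = sub[sub[protected_col] == 1][pred_col]
                    nonprot = sub[sub[protected_col] == 0][pred_col]
                    if len(prot) < min_size or len(nonprot) < min_size:
                        continue  # skip subgroups too small to compare reliably
                    gap = prot.mean() - nonprot.mean()
                    results.append({
                        "subgroup": dict(zip(cols, np.atleast_1d(values))),
                        "gap": gap,
                        "n_protected": len(prot),
                        "n_nonprotected": len(nonprot),
                    })
        return sorted(results, key=lambda r: abs(r["gap"]), reverse=True)

    # Hypothetical usage with COMPAS-style columns:
    # df = pd.read_csv("compas.csv")
    # top = scan_subgroups(df, pred_col="risk_score", protected_col="is_black",
    #                      feature_cols=["age_bucket", "sex", "charge_degree"])
    # print(top[0])

Unlike this exhaustive sketch, the paper's framework conditions on covariates so that the detected disparity corresponds to a chosen fairness definition (separation or sufficiency) and reports statistical significance for the most anomalous subgroup.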

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
Transactions on Machine Learning Research
Archive span
2022-2026
Indexed papers
3849
Paper id
442118871634294187