AAAI 2022
On the Relation between Distributionally Robust Optimization and Data Curation (Student Abstract)
Abstract
Machine learning systems trained by minimizing average error have been shown to perform inconsistently across important subsets of the data, a defect that a low average error over the entire dataset does not expose. In social and economic applications where the data represent people, this can lead to discrimination against underrepresented gender, ethnic, and other groups. Distributionally Robust Optimization (DRO) attempts to address this problem by minimizing the worst expected risk across subpopulations. We establish theoretical results that clarify the relation between DRO and the optimization of the same loss averaged on a weighted training dataset. A practical implication of our results is that neither DRO nor curation of the training set is a complete solution for bias mitigation.
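The relation the abstract describes can be illustrated with a minimal sketch (not the authors' code; the group names and per-example losses below are hypothetical): the group-DRO objective is the worst per-group expected loss, and it coincides with a reweighted average loss when the weights concentrate on the worst-off group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-example losses for three hypothetical subpopulations.
losses = {
    "group_a": rng.uniform(0.0, 1.0, size=100),
    "group_b": rng.uniform(0.2, 1.2, size=30),   # underrepresented, harder group
    "group_c": rng.uniform(0.0, 0.8, size=200),
}

group_means = {g: l.mean() for g, l in losses.items()}

# Standard ERM: average over the pooled dataset (dominated by the large groups).
pooled = np.concatenate(list(losses.values()))
erm_loss = pooled.mean()

# Group DRO: the worst expected risk across subpopulations.
dro_loss = max(group_means.values())

# Reweighted ERM: a weighted average of the group losses. Placing all weight
# on the worst group recovers the DRO objective, illustrating the connection
# between DRO and training on a reweighted (curated) dataset.
weights = {g: 1.0 if m == dro_loss else 0.0 for g, m in group_means.items()}
reweighted_loss = sum(weights[g] * group_means[g] for g in losses)

print(f"ERM: {erm_loss:.3f}  DRO: {dro_loss:.3f}  reweighted: {reweighted_loss:.3f}")
```

Because the pooled average is a convex combination of the group means, the DRO loss always upper-bounds the ERM loss; the gap between the two is exactly what a low average error fails to expose.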