
AAAI 2026

Fairness Perceptions of Large Language Models

Conference Paper · AAAI Technical Track on Philosophy and Ethics of AI · Artificial Intelligence

Abstract

Large language models (LLMs) are increasingly used for decision-making tasks where fairness is an essential desideratum. But what does fairness even mean to an LLM? To investigate this, we conduct a comprehensive evaluation of how LLMs perceive fairness in the context of resource allocation, using both synthetic and real-world data. We find that several state-of-the-art LLMs, when instructed to be fair, tend to prioritize improving collective welfare rather than distributing benefits equally. Their perception of fairness is somewhat sensitive to how user preferences are represented, but less so to the real-world context of the decision-making task. Finally, we show that the best strategy for aligning an LLM's perception of fairness to a specific criterion is to provide it as a mathematical objective, without referencing "fairness", as this prevents the LLM from mixing the criterion with its own prior notions of fairness. Our results provide practical insights for understanding and shaping how LLMs interpret fairness in resource allocation problems.
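To illustrate the abstract's final finding — that a fairness criterion is best conveyed to an LLM as a bare mathematical objective rather than via the word "fairness" — here is a minimal sketch. The prompt wording and the egalitarian (max-min) objective are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: steer an LLM toward a specific allocation criterion by stating it
# as a mathematical objective instead of invoking "fairness".
# The prompt text and the egalitarian (max-min) criterion below are
# illustrative assumptions, not the paper's exact prompts.

def egalitarian_welfare(utilities):
    """Max-min criterion: an allocation's score is its worst-off agent's utility."""
    return min(utilities)

def objective_prompt(agents, items):
    """Prompt that encodes the criterion mathematically, never mentioning fairness."""
    return (
        f"Allocate items {items} among agents {agents} to maximize "
        "min_i u_i(A_i), the minimum utility over all agents."
    )

def baseline_prompt(agents, items):
    """Baseline that asks for 'fairness' and lets the model mix in its own priors."""
    return f"Allocate items {items} among agents {agents} fairly."

# Ranking two candidate allocations under the stated objective:
candidates = {"A": [3, 1, 4], "B": [2, 2, 3]}   # per-agent utilities
best = max(candidates, key=lambda k: egalitarian_welfare(candidates[k]))
print(best)  # "B": its minimum utility (2) beats allocation A's (1)
```

The same pattern extends to other criteria (e.g. utilitarian or Nash welfare) by swapping the scoring function and the objective stated in the prompt.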

Keywords

No keywords are indexed for this paper.

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980-2026
Indexed papers
28,718
Paper id
889296116738801490