
AAAI 2026

When Equal Isn’t Fair: Mitigating Over-Normalization in Large Language Models (Student Abstract)

Short Paper · AAAI Student Abstract and Poster Program · Artificial Intelligence

Abstract

Bias in Large Language Models (LLMs) is increasingly addressed through fairness-oriented techniques. However, in some cases these approaches may inadvertently remove genuine cultural differences between groups, leading to “over-normalization,” in which models lose important socio-cultural distinctions. In this work, we introduce OverNormEval, a benchmark designed to detect when an LLM exhibits such over-normalization. We further explore the use of Direct Preference Optimization (DPO) to mitigate over-normalization.
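The abstract names DPO but, being an abstract, does not detail the objective. For orientation, below is a minimal sketch of the standard DPO loss (Rafailov et al., 2023) on preference pairs; the function name dpo_loss and the dummy log-probability values are illustrative assumptions, not the paper's implementation or training setup.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective on per-sequence log-probabilities.

    Each argument is a tensor of summed token log-probs for the chosen
    (preferred) or rejected completion under the trainable policy or
    the frozen reference model. beta scales the implicit KL penalty.
    """
    # Log-ratio of policy to reference for each completion.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid of the scaled margin; minimized when the policy
    # prefers the chosen completion more strongly than the reference.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Example with hypothetical per-sequence log-probs:
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.5]))

In a setting like the one the abstract describes, the preference pairs would presumably favor completions that preserve genuine socio-cultural distinctions over ones that erase them, though the paper's actual data construction is not specified here.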

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue: AAAI Conference on Artificial Intelligence
Archive span: 1980–2026
Indexed papers: 28,718
Paper id: 480911871432732734