Arrow Research

Author name cluster

Ravada Satyadev

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches; it is not a full identity-disambiguation profile.

1 paper
1 author row

Possible papers

1

AAAI Conference 2026 Short Paper

When Equal Isn’t Fair: Mitigating Over-Normalization in Large Language Models (Student Abstract)

  • Ravada Satyadev
  • Aditya Ganesh Kumar
  • Avinash Anand
  • Rajiv Ratn Shah
  • Zhengkui Wang
  • Mukesh Prasad

Bias in Large Language Models (LLMs) is increasingly addressed through fairness-oriented techniques. However, in some cases these approaches may inadvertently remove genuine cultural differences between groups, leading to “over-normalization”, where models lose important socio-cultural distinctions. In this work, we introduce OverNormEval, a benchmark designed to detect when an LLM exhibits such over-normalization. We further explore the use of Direct Preference Optimization (DPO) to mitigate it.
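For context on the mitigation technique the abstract names: DPO (Rafailov et al., 2023) fine-tunes a policy directly on preference pairs, without a separate reward model. The sketch below is the standard DPO objective in its general form; the paper's specific preference data and setup are not described on this page.

```latex
% Standard DPO loss: x is a prompt, y_w / y_l the preferred / dispreferred
% responses, \pi_\theta the policy being trained, \pi_{\mathrm{ref}} the frozen
% reference model, \beta a temperature-like scaling, \sigma the sigmoid.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \left[
      \log \sigma\!\left(
        \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
        - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
      \right)
    \right]
```

Intuitively, the loss pushes the policy to assign relatively more probability mass to preferred responses than the reference model does; for an over-normalization setting, the preference pairs would presumably favor responses that retain genuine socio-cultural distinctions.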