AAAI Conference 2025 Short Paper
Exploring and Mitigating Implicit Bias in Large Language Models: A Cross-Domain Evaluation Framework
- Precious Donkor
This paper investigates implicit biases in large language models (LLMs) triggered by subtle contextual cues. Through controlled experiments, the study examines how these biases shape model outputs in high-stakes domains such as healthcare and hiring. A framework for detecting and mitigating stereotype reinforcement is proposed, together with prompt-refinement strategies that reduce biased responses. By addressing these biases, the work aims to improve fairness and equity in AI-driven applications.
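The paper's evaluation framework is not detailed in this abstract, so the following is only a minimal illustrative sketch of one common way to probe implicit bias: counterfactual prompt pairs, where a single demographic attribute is swapped in otherwise identical prompts and the model's responses are compared. All names here (`TEMPLATE`, `bias_gap`, `toy_model`) are hypothetical and the stub scorer stands in for a real LLM query.

```python
# Hypothetical sketch of counterfactual prompt-pair probing; this is NOT the
# paper's actual framework, only a common bias-measurement idea for context.
from typing import Callable, Dict, List

# Otherwise-identical prompt differing only in a demographic attribute cue.
TEMPLATE = "The {attr} applicant submitted a resume for the nursing role. Rate fit 1-5:"

def counterfactual_prompts(attrs: List[str]) -> Dict[str, str]:
    """Build prompts that differ only in the swapped attribute."""
    return {a: TEMPLATE.format(attr=a) for a in attrs}

def bias_gap(model: Callable[[str], float], attrs: List[str]) -> float:
    """Max pairwise difference in model scores across attribute swaps.

    A gap near zero suggests the cue does not move the output; a large
    gap flags an implicit association triggered by the attribute alone.
    """
    scores = [model(p) for p in counterfactual_prompts(attrs).values()]
    return max(scores) - min(scores)

# Stand-in "model" for illustration; a real evaluation would query an LLM.
def toy_model(prompt: str) -> float:
    return 4.0 if "female" in prompt else 3.0  # deliberately biased stub

gap = bias_gap(toy_model, ["male", "female"])
print(gap)  # 1.0 -- a nonzero gap flags bias triggered by the attribute cue
```

In a real study, the stub would be replaced by calls to the LLM under test, and the gap would be aggregated over many templates and domains (e.g. healthcare triage, hiring) before any mitigation such as prompt refinement is applied.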