
AAAI 2025

Exploring and Mitigating Implicit Bias in Large Language Models: A Cross-Domain Evaluation Framework

Short Paper AAAI Undergraduate Consortium Artificial Intelligence

Abstract

This paper investigates implicit biases in large language models (LLMs) that are triggered by subtle contextual cues. Through a series of experiments, the study examines how these biases influence model outputs in high-stakes domains such as healthcare and hiring. A cross-domain evaluation framework for mitigating stereotype reinforcement is proposed, along with prompt-refinement strategies to reduce biased responses. The goal is to improve fairness in AI-driven applications by identifying these biases and enhancing model equity.

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980–2026
Indexed papers
28718
Paper id
571567509928478604