
ICML 2025

Selective Prompt Anchoring for Code Generation

Conference Paper Accept (poster) Artificial Intelligence · Machine Learning

Abstract

Recent advances in large language models (LLMs) have transformed software development by automatically generating code from natural language. Yet challenges remain in generating fully correct code that aligns with user intent. Our study reveals that LLMs tend to pay less attention to user prompts as more code tokens are generated. We hypothesize that this attention dilution issue is an important reason for code generation errors. To mitigate this issue, we propose Selective Prompt Anchoring (SPA) to guide code LLMs to pay more attention to user intent when generating code. We evaluate SPA using six base LLMs across six benchmarks. Our results demonstrate that SPA enhances Pass@1 by up to 12.9%, consistently outperforming SOTA code generation methods in all settings. Our code is available at https://github.com/magic-YuanTian/Selective-Prompt-Anchoring.
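The abstract and keywords describe anchoring the model's attention on the user prompt at the logit level. The snippet below is a minimal sketch of one plausible decoding-time realization of that idea: generate each token twice, once with the full prompt and once with the prompt content masked out, then amplify the logit difference attributable to the prompt. The model name, the `anchored_generate` helper, and the `anchor_weight` scheme are illustrative assumptions, not the paper's exact SPA formulation; see the linked repository for the authors' implementation.

```python
# Hedged sketch: contrastive, prompt-anchored greedy decoding.
# Assumptions (not from the paper): model choice, masking strategy, weighting scheme.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/deepseek-coder-1.3b-instruct"  # any causal code LLM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def anchored_generate(prompt: str, max_new_tokens: int = 128, anchor_weight: float = 0.3) -> str:
    """Greedy decoding that amplifies the logit signal contributed by the prompt."""
    full_ids = tok(prompt, return_tensors="pt").input_ids
    # "Masked" context: same length as the prompt, but with its content replaced,
    # so the model sees the positions without the user's intent.
    mask_id = tok.pad_token_id if tok.pad_token_id is not None else tok.eos_token_id
    masked_ids = torch.full_like(full_ids, mask_id)

    generated = []
    for _ in range(max_new_tokens):
        gen = torch.tensor([generated], dtype=torch.long)  # shape (1, t); empty on step 0
        # Two forward passes per step (no KV cache, for clarity): with and without the prompt.
        logits_full = model(torch.cat([full_ids, gen], dim=1)).logits[:, -1, :]
        logits_masked = model(torch.cat([masked_ids, gen], dim=1)).logits[:, -1, :]
        # Nudge the distribution toward tokens that depend on the prompt content.
        logits = logits_full + anchor_weight * (logits_full - logits_masked)
        next_id = int(logits.argmax(dim=-1))
        if next_id == tok.eos_token_id:
            break
        generated.append(next_id)
    return tok.decode(generated, skip_special_tokens=True)

print(anchored_generate("# Write a Python function that reverses a string.\n"))
```

Setting `anchor_weight` to 0 recovers ordinary greedy decoding; larger values push generation to rely more heavily on the prompt, at the cost of a second forward pass per step.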

Authors

Keywords

  • Large Language Models (LLMs)
  • Prompt
  • Code Generation
  • Self-attention
  • Taylor expansion
  • Logits
  • Anchoring

Context

Venue
International Conference on Machine Learning
Archive span
1993-2025
Indexed papers
16471
Paper id
887481823667623721