AAAI 2026 Conference Paper
Automated Human Strategic Behavior Modeling via Large Language Models
- Xiaohan Xie
- Haoran Yu
- Biying Shou
- Jianwei Huang
What if machines could discover human behavioral patterns better than experts can? Traditional behavioral modeling in economics depends on costly manual refinement by domain experts, severely limiting scalability and discovery potential. We introduce AutoBM, an automated behavioral modeling framework that leverages large language models (LLMs) to systematically generate, evaluate, and refine interpretable behavioral models directly from human behavior data. AutoBM represents candidate models as structured natural-language specifications that explicitly define symbolic terms along with their tunable parameters, interpretations, and design rationales. It uses LLMs to automatically translate each specification into executable code, optimize the tunable parameters, and evaluate model performance. Guided by LLM-based search strategies, AutoBM iteratively recombines and improves models at the term level, closely mirroring the practice of human experts. Experiments across three distinct strategic environments (the ultimatum game, repeated rock-paper-scissors, and continuous double auctions) demonstrate that AutoBM-generated models consistently outperform leading manually crafted models, achieving significant gains in prediction accuracy while remaining clearly interpretable. These results show that automated frameworks can not only match but systematically exceed human expertise in behavioral modeling, fundamentally changing how we understand strategic human behavior.
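The generate, tune, evaluate, and refine loop sketched in the abstract can be illustrated roughly as follows. This is a minimal hypothetical sketch, not AutoBM's actual interface: every name (`tune`, `search`, the `spec` fields, the fitness function) is invented for illustration, and the LLM-driven steps (specification generation and code translation) are replaced by fixed stand-ins.

```python
# Hypothetical sketch of an AutoBM-style search loop; all names are
# illustrative and the LLM components are replaced by fixed stand-ins.
import random

def evaluate(params, data):
    # Placeholder fitness: negative squared error of a one-parameter model.
    return -sum((params["threshold"] - x) ** 2 for x in data)

def tune(spec, data, steps=100):
    # Simple random-search optimization over the spec's tunable parameters.
    best = dict(spec["params"])
    best_score = evaluate(best, data)
    for _ in range(steps):
        cand = {k: v + random.gauss(0, 0.05) for k, v in best.items()}
        score = evaluate(cand, data)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

def search(data, rounds=5):
    # Term-level refinement: propose a variant term set each round and
    # keep whichever specification scores better after parameter tuning.
    spec = {"terms": ["fairness_threshold"], "params": {"threshold": 0.5}}
    spec["params"], score = tune(spec, data)
    for _ in range(rounds):
        variant = {"terms": spec["terms"] + ["noise"],  # recombined terms
                   "params": dict(spec["params"])}
        variant["params"], v_score = tune(variant, data)
        if v_score > score:
            spec, score = variant, v_score
    return spec, score
```

In the full framework described by the abstract, the hand-written fitness stand-in would be replaced by LLM-generated executable code, and term recombination would be guided by the LLM rather than a fixed variant rule.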