
AAAI 2026

Towards Aligned and Efficient Large Language Models

Conference Paper New Faculty Highlights Artificial Intelligence

Abstract

Large language models (LLMs) have rapidly transformed the landscape of AI, demonstrating remarkable capabilities across reasoning, communication, and problem-solving. Yet, realizing their full potential requires addressing two critical challenges. First, their behavior must be steered and refined after training to ensure reliability, safety, and alignment with human values and intentions. Second, their large scale comes with substantial costs in training and deployment, necessitating research into more efficient methods. My research centers on advancing both of these fronts—making LLMs both aligned and efficient. On one side, I investigate post-training techniques that allow models to better reflect human preferences, demonstrate strong reasoning capabilities, and mitigate hallucination. On the other side, I study methods for improving data efficiency in training and inference efficiency in deployment. Together, these thrusts highlight a broader vision of enabling LLMs that are not only powerful, but also trustworthy and accessible at scale.

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue: AAAI Conference on Artificial Intelligence
Archive span: 1980-2026
Indexed papers: 28718
Paper id: 97690727709383572