
AAAI 2026

Scaling Up AI Alignment

Conference Paper Senior Member Presentation Artificial Intelligence

Abstract

From the expert AI systems of the 1970s to the self-supervised systems of the 2020s, the pendulum of AI development has swung over the last 50 years from heavy reliance on human feedback to minimal or no reliance. Self-supervised approaches have contributed significantly to the success and scalable development of AI. However, we are now at a tipping point: the future of AI, and whether society ultimately benefits from this technology, depends critically on subsequent AI development aligning with human goals and values. Recognizing this, efforts to align AI models with human expectations and values have been ramping up. Human feedback, however, remains limited and difficult to elicit. Thus, a key question lingers: how can we scale up the alignment of AI systems with individual expectations and societal norms? This talk and paper provide an overview of and perspective on efforts to answer this question.

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980–2026
Indexed papers
28,718
Paper id
368951013823338478