
RLDM 2013

Optimal Task Decomposition

Conference Abstract (accepted) · Artificial Intelligence · Decision Making · Machine Learning · Reinforcement Learning

Abstract

Reinforcement learning has provided a rich framework for understanding the computational substrates underlying human decision making. Most work so far has focused on simple decision problems with small state spaces. More recently, researchers have begun applying ideas from hierarchical reinforcement learning, and the options framework in particular, to address how human decision making may scale. This framework specifies how the computational complexity associated with both learning and planning in high-dimensional state spaces may be reduced through the use of temporal abstraction. In addition to primitive actions that lead to transitions between adjacent states, the agent can execute options that lead to transitions between distant states. While there is now evidence that humans make use of options, it is unclear how they come to select which options are useful in the first place. We present option selection as a Bayesian model comparison problem and show that the options people select are those corresponding to the maximal model evidence.
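The abstract does not give the model details, but the core idea it names, choosing among candidate options by maximal model evidence, can be sketched as follows. This is a hypothetical illustration, not the authors' model: each candidate option is treated as a model of observed behavior, here a Beta-Bernoulli model of whether trajectories pass through the option's subgoal, so the marginal likelihood has a closed form. The option names and counts are invented for the example.

```python
import math

def log_evidence(hits, misses, a=1.0, b=1.0):
    """Log marginal likelihood of a Beta(a, b)-Bernoulli model,
    integrating out the success probability analytically:
    p(D) = B(a + hits, b + misses) / B(a, b)."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + hits) + math.lgamma(b + misses)
            - math.lgamma(a + b + hits + misses))

def select_option(counts):
    """Bayesian model comparison: counts maps each candidate option
    to (subgoal hits, misses); return the option whose model has the
    maximal log evidence."""
    return max(counts, key=lambda opt: log_evidence(*counts[opt]))

# Invented data: trajectories consistently pass through the doorway
# subgoal, so that option's model best predicts the observations.
counts = {"doorway": (18, 2), "corner": (10, 10), "random": (5, 15)}
best = select_option(counts)  # -> "doorway"
```

Because the evidence integrates over parameters rather than maximizing, it automatically penalizes options whose subgoals are only loosely related to the observed trajectories, which is the usual motivation for using model evidence rather than raw likelihood in this kind of comparison.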

Keywords

No keywords are indexed for this paper.

Context

Venue
Multidisciplinary Conference on Reinforcement Learning and Decision Making
Archive span
2013-2025
Indexed papers
1004
Paper id
870905461405103883