
EWRL 2011

Options with Exceptions

Conference Paper · Macro-actions in Reinforcement Learning · Artificial Intelligence · Machine Learning · Reinforcement Learning

Abstract

An option is a policy fragment that represents a solution to a subproblem encountered frequently in a domain. Options may be treated as temporally extended actions, allowing that solution to be reused when solving larger problems. Often, however, it is hard to find subproblems that are exactly the same, and these differences, however small, must be accounted for in the reused policy. In this paper, the notion of options with exceptions is introduced to address such scenarios. It is inspired by the Ripple Down Rules approach used in the data mining and knowledge representation communities. The goal is to develop an option representation in which small changes to a subproblem's solution can be accommodated without losing the original solution. We empirically validate the proposed framework on a simulated game domain.
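As a rough illustration of the idea the abstract describes, one can picture an option as a base policy wrapped with an ordered list of exception rules, where later exceptions override earlier behavior without modifying the base policy — loosely in the spirit of Ripple Down Rules. The sketch below is purely illustrative; the class and method names are assumptions, not the paper's actual representation.

```python
# Illustrative sketch (not the paper's implementation): an option as a
# base policy plus Ripple-Down-Rules-style exceptions. Each exception is
# a (condition, action) pair; exceptions are checked most-recent-first,
# so new fixes take precedence while the original policy is preserved.

class OptionWithExceptions:
    def __init__(self, base_policy):
        self.base_policy = base_policy  # callable: state -> action
        self.exceptions = []            # list of (condition, action) pairs

    def add_exception(self, condition, action):
        # Adding an exception never alters the base policy, so the
        # original solution is retained for all non-matching states.
        self.exceptions.append((condition, action))

    def act(self, state):
        # Most recently added exceptions win; fall back to the base policy.
        for condition, action in reversed(self.exceptions):
            if condition(state):
                return action
        return self.base_policy(state)


# Usage: a grid-world option that normally moves right, patched for one state.
opt = OptionWithExceptions(lambda s: "right")
opt.add_exception(lambda s: s == (2, 3), "up")
print(opt.act((0, 0)))  # unaffected state: base policy applies
print(opt.act((2, 3)))  # patched state: exception applies
```

The design choice mirrored here is that the exception list grows monotonically: accommodating a new variant of the subproblem is an additive change, so earlier solutions are never overwritten.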

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
European Workshop on Reinforcement Learning
Archive span
2008-2025
Indexed papers
649
Paper id
277549385931588971