
NeurIPS 1999

Monte Carlo POMDPs

Conference Paper · Artificial Intelligence · Machine Learning

Abstract

We present a Monte Carlo algorithm for learning to act in partially observable Markov decision processes (POMDPs) with real-valued state and action spaces. Our approach uses importance sampling for representing beliefs, and Monte Carlo approximation for belief propagation. A reinforcement learning algorithm, value iteration, is employed to learn value functions over belief states. Finally, a sample-based version of nearest neighbor is used to generalize across states. Initial empirical results suggest that our approach works well in practical applications.
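To make the belief-tracking step concrete, below is a minimal Python sketch of a particle-filter belief update in the spirit of the abstract: beliefs are represented as particle sets, particles are propagated through a dynamics model, and importance sampling reweights them by the observation likelihood. The dynamics model `sample_next_state` and the sensor model `observation_likelihood` are hypothetical stand-ins for illustration, not functions from the paper.

```python
import random
import math

def sample_next_state(state, action):
    # Hypothetical real-valued dynamics: drift by the action plus Gaussian noise.
    return state + action + random.gauss(0.0, 0.1)

def observation_likelihood(observation, state):
    # Hypothetical Gaussian sensor model (unnormalized likelihood).
    return math.exp(-0.5 * ((observation - state) / 0.2) ** 2)

def belief_update(particles, action, observation):
    """One Monte Carlo belief-propagation step: propagate each particle
    through the dynamics, weight it by the observation likelihood, then
    resample to obtain an unweighted particle set for the new belief."""
    propagated = [sample_next_state(s, action) for s in particles]
    weights = [observation_likelihood(observation, s) for s in propagated]
    total = sum(weights) or 1.0  # guard against all-zero weights
    probs = [w / total for w in weights]
    # Importance resampling: draw particles in proportion to their weights.
    return random.choices(propagated, weights=probs, k=len(particles))

# Example: start from a belief of 100 particles around 0 and update it
# after taking action 0.5 and observing 0.6.
belief = [random.gauss(0.0, 1.0) for _ in range(100)]
belief = belief_update(belief, action=0.5, observation=0.6)
```

Value iteration over belief states and the sample-based nearest-neighbor generalization described in the abstract would then operate on particle sets like the one returned by `belief_update`.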

Authors

Sebastian Thrun

Keywords

No keywords are indexed for this paper.

Context

Venue
Annual Conference on Neural Information Processing Systems
Archive span
1987-2025
Indexed papers
30776
Paper id
149488760396137705