PRL 2024
Solving Minecraft Tasks via Model Learning
Abstract
Minecraft is a sandbox game that offers a rich and complex environment for AI research. Its design allows for defining diverse tasks and challenges for AI agents, such as gathering resources and crafting items. Previous works have applied both Reinforcement Learning (RL) and Automated Planning methods to accomplish different tasks in Minecraft. RL methods usually require a large number of interactions with the environment, while planning methods require a model of the domain to be available. Creating planning domain models for Minecraft tasks is arduous. Algorithms for learning a planning domain model from observations exist, yet they have mostly been used on planning benchmarks. In this work, we explore the use of such algorithms for solving Minecraft tasks. We propose an agent that learns domain models from observations—either generated by an expert or collected online—and uses them with an off-the-shelf domain-independent planner. As a case study, we explore how such an agent can be used for the task of crafting a wooden pogo stick. Experimental results demonstrate the benefit of domain model learning and planning over standard RL-based methods.
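The learn-then-plan pipeline the abstract describes can be sketched as follows. This is a toy illustration only: the fact and action names, the intersection-style model learner, and the breadth-first planner are illustrative assumptions standing in for the paper's actual learning algorithm and off-the-shelf planner.

```python
from collections import deque

def learn_model(traces):
    """Learn a STRIPS-like action model from (pre_state, action, post_state)
    observations. Sketch of a conservative intersection-style learner,
    not the paper's exact algorithm."""
    model = {}
    for pre, action, post in traces:
        add, dele = post - pre, pre - post
        if action not in model:
            model[action] = {"pre": set(pre), "add": set(add), "del": set(dele)}
        else:
            m = model[action]
            m["pre"] &= pre   # keep only facts true in every observed pre-state
            m["add"] |= add   # accumulate observed add effects
            m["del"] |= dele  # accumulate observed delete effects
    return model

def plan(model, init, goal):
    """Breadth-first forward search using the learned model -- a stand-in
    for a domain-independent planner."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, m in model.items():
            if m["pre"] <= state:  # action applicable in this state
                nxt = frozenset((state - m["del"]) | m["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Hypothetical observations from a Minecraft-like crafting chain.
traces = [
    (frozenset({"near_tree"}), "chop", frozenset({"has_log"})),
    (frozenset({"has_log"}), "craft_planks", frozenset({"has_planks"})),
    (frozenset({"has_planks"}), "craft_sticks",
     frozenset({"has_planks", "has_sticks"})),
]
model = learn_model(traces)
print(plan(model, {"near_tree"}, {"has_sticks"}))
# ['chop', 'craft_planks', 'craft_sticks']
```

In practice the learned model would be emitted as a PDDL domain and handed to a planner such as Fast Downward, but the observe / learn / plan loop has the same shape.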
Authors
Keywords
No keywords are indexed for this paper.
Context
- Venue: Bridging the Gap Between AI Planning and Reinforcement Learning
- Archive span: 2020–2025
- Indexed papers: 151
- Paper id: 193643852438486377