
UAI 2012

Incentive Decision Processes

Conference Paper · Accepted · Artificial Intelligence · Machine Learning · Uncertainty in Artificial Intelligence

Abstract

We consider Incentive Decision Processes (IDPs), where a principal seeks to reduce its costs due to another agent's behavior by offering the agent incentives for alternate behavior. We focus on the case where the principal interacts with a greedy agent whose preferences are hidden and static. Though IDPs can be modeled directly as partially observable Markov decision processes (POMDPs), we show that the IDP can be reduced to, or approximated by, a polynomially-sized MDP; when this representation is approximate, we prove the resulting policy is boundedly optimal for the original IDP. Our empirical simulations demonstrate the performance benefit of our algorithms over simpler approaches, and also show that the approximate representation yields a significantly faster algorithm whose performance is extremely close to that of the optimal policy for the original IDP.
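The abstract's setting can be illustrated with a minimal toy sketch: a greedy agent picks the action maximizing its own (hidden, static) value plus any incentive offered, while the principal pays the action's cost plus the incentive. The action names, value table, and cost table below are hypothetical, and this is not the paper's algorithm, only an illustration of the interaction model.

```python
# Toy sketch of one round of an Incentive Decision Process.
# All numbers are made up for illustration.

actions = ["a", "b", "c"]
agent_value = {"a": 3.0, "b": 2.0, "c": 1.0}     # agent's hidden, static preferences
principal_cost = {"a": 5.0, "b": 1.0, "c": 0.5}  # cost each action imposes on the principal

def agent_choice(incentives):
    # Greedy agent: maximizes its own value plus any offered incentive.
    return max(actions, key=lambda x: agent_value[x] + incentives.get(x, 0.0))

# With no incentives, the agent picks its favorite action "a",
# which is the most expensive one for the principal.
assert agent_choice({}) == "a"

# A payment of 1.5 on "b" flips the greedy agent's choice and lowers
# the principal's total outlay (action cost + payment): 1.0 + 1.5 = 2.5 < 5.0.
offer = {"b": 1.5}
chosen = agent_choice(offer)
total = principal_cost[chosen] + offer.get(chosen, 0.0)
print(chosen, total)  # b 2.5
```

Because the agent's preferences are hidden, the principal must infer them from the agent's responses to offers over time; the paper's contribution is showing this planning problem admits a polynomially-sized (approximate) MDP representation rather than requiring a general POMDP solver.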

Keywords

No keywords are indexed for this paper.

Context

Venue
Conference on Uncertainty in Artificial Intelligence
Archive span
1985-2025
Indexed papers
3717
Paper id
607785704536254426