Highlights 2023
MDPs as Distribution Transformers: Affine Invariant Synthesis for Safety Objectives
Abstract
Markov decision processes (MDPs) can be viewed as transformers of probability distributions. While this view is useful from a practical standpoint to reason about trajectories of distributions, basic reachability and safety problems are known to be computationally intractable (i.e., Skolem-hard) to solve in such models. Further, we show that even for simple examples of MDPs, strategies for safety objectives over distributions can require infinite memory and randomization.

In light of this, we present a novel overapproximation approach to synthesize strategies in an MDP such that a safety objective over the distributions is met. More precisely, we develop a new framework for template-based synthesis of certificates as affine distributional and inductive invariants for safety objectives in MDPs. We provide two algorithms within this framework: one can only synthesize memoryless strategies but has relative completeness guarantees, while the other can synthesize general strategies. The runtime complexity of both algorithms is in PSPACE. We implement these algorithms and show that they can solve several non-trivial examples.

This is joint work with S. Akshay, Krishnendu Chatterjee and Tobias Meggendorfer that will be presented at CAV 2023.

Contributed talk given by Đorđe Žikelić
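The distribution-transformer view above can be sketched in a few lines. The following is an illustrative toy example only, not the paper's synthesis algorithm: the states, transition matrix, and threshold are invented here, and the code merely checks a candidate affine invariant along one trajectory rather than synthesizing a certificate.

```python
import numpy as np

# Toy sketch: an MDP, under a fixed memoryless strategy, induces a
# transition matrix P, and each step transforms the current distribution
# mu into mu @ P. A candidate affine distributional invariant c . mu <= d
# expresses a safety objective such as "the probability mass in the bad
# state stays below d". (All values below are invented for illustration.)

# Three states; state 2 is the "bad" state. Under this strategy, no
# probability mass ever flows into state 2.
P = np.array([
    [0.3, 0.7, 0.0],
    [0.6, 0.4, 0.0],
    [0.0, 0.0, 1.0],
])

mu = np.array([1.0, 0.0, 0.0])            # initial distribution over states
c, d = np.array([0.0, 0.0, 1.0]), 0.05    # invariant: mass in state 2 <= 0.05

for _ in range(100):
    assert c @ mu <= d + 1e-12   # safety constraint holds at this step
    mu = mu @ P                  # one step: the MDP transforms the distribution

assert abs(mu.sum() - 1.0) < 1e-9  # mu is still a probability distribution
```

A certificate in the paper's sense would additionally require the invariant to be inductive (preserved by the update for every distribution satisfying it), which is what the template-based synthesis establishes; the loop above only checks it along a single trajectory.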
Authors
S. Akshay, Krishnendu Chatterjee, Tobias Meggendorfer, Đorđe Žikelić
Keywords
No keywords are indexed for this paper.
Context
- Venue: Highlights of Logic, Games and Automata
- Archive span: 2013-2025
- Indexed papers: 1236
- Paper id: 1074206671016571507