Arrow Research search

Author name cluster

Sven Schewe

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

40 papers
2 author rows

Possible papers

40

AAAI Conference 2026 Conference Paper

Good-for-MDP State Reduction for Stochastic LTL Planning

  • Christoph Weinhuber
  • Giuseppe De Giacomo
  • Yong Li
  • Sven Schewe
  • Qiyi Tang

We study stochastic planning problems in Markov Decision Processes (MDPs) with goals specified in Linear Temporal Logic (LTL). The state-of-the-art approach transforms LTL formulas into good-for-MDP (GFM) automata, which feature a restricted form of nondeterminism. These automata are then composed with the MDP, allowing the agent to resolve the nondeterminism during policy synthesis. A major factor affecting the scalability of this approach is the size of the generated automata. In this paper, we propose a novel GFM state-space reduction technique that significantly reduces the number of automata states. Our method employs a sophisticated chain of transformations, leveraging recent advances in good-for-games minimisation developed for adversarial settings. In addition to our theoretical contributions, we present empirical results demonstrating the practical effectiveness of our state-reduction technique. Furthermore, we introduce a direct construction method for formulas of the form GFφ, where φ is a co-safety formula. This construction is provably single-exponential in the worst case, in contrast to the general doubly-exponential complexity. Our experiments confirm the scalability advantages of this specialised construction.

ECAI Conference 2025 Conference Paper

Efficient Learning of Weak Deterministic Büchi Automata

  • Mona Alluwaym
  • Yong Li 0031
  • Sven Schewe
  • Qiyi Tang 0001

We present an efficient Angluin-style learning algorithm for weak deterministic Büchi automata (wDBAs). In contrast to ordinary deterministic Büchi and co-Büchi automata, wDBAs have a minimal normal form, and we show that we can learn this minimal normal form efficiently. We provide an improved result on the number of queries required and show on benchmarks that this theoretical advantage translates into significantly fewer queries: while previous approaches require a quintic number of queries, we only require quadratically many queries in the size of the canonic wDBA that recognises the target language.

GandALF Workshop 2025 Workshop Paper

Generalised Reachability Games Revisited

  • Sougata Bose
  • Daniel Hausmann
  • Soumyajit Paul
  • Sven Schewe
  • Tansholpan Zhanabekova

Classic reachability games on graphs are zero-sum games, where the goal of one player, Eve, is to visit a vertex from a given target set, and that of the other player, Adam, is to prevent this. Generalised reachability games, studied by Fijalkow and Horn, are a generalisation of reachability objectives, where instead of a single target set, there is a family of target sets and Eve must visit all of them in any order. In this work, we further study the complexity of solving two-player games on graphs with generalised reachability objectives. Our results are twofold: first, we provide an improved complexity picture for generalised reachability games, expanding the known tractable class from games in which all target sets are singleton to additionally allowing a logarithmic number of target sets of arbitrary size. Second, we study optimisation variants of generalised reachability with a focus on the size of the target sets. For these problems, we show intractability for most interesting cases. Particularly, in contrast to the tractability in the classic variant for singleton target sets, the optimisation problem is NP-hard when Eve tries to maximise the number of singleton target sets that are visited. Tractability can be recovered in the optimisation setting when all target sets are singleton by requiring that Eve pledges a maximum-sized subset of target sets that she can guarantee to visit.

IJCAI Conference 2025 Conference Paper

Solving MDPs with LTLf+ and PPLTL+ Temporal Objectives

  • Giuseppe De Giacomo
  • Yong Li
  • Sven Schewe
  • Christoph Weinhuber
  • Pian Yu

The temporal logics LTLf+ and PPLTL+ have recently been introduced to express objectives over infinite traces. These logics are appealing because they match the expressive power of LTL on infinite traces while enabling efficient DFA-based techniques, which have been crucial to the scalability of reactive synthesis and adversarial planning in LTLf and PPLTL over finite traces. In this paper, we demonstrate that these logics are also highly effective in the context of MDPs. Introducing a technique tailored for probabilistic systems, we leverage the benefits of efficient DFA-based methods and compositionality. This approach is simpler than its nonprobabilistic counterparts in reactive synthesis and adversarial planning, as it accommodates a controlled form of nondeterminism ("good for MDPs") in the automata when transitioning from finite to infinite traces. Notably, by exploiting compositionality, our solution is both implementation-friendly and well-suited for straightforward symbolic implementations.

IJCAI Conference 2024 Conference Paper

Angluin-Style Learning of Deterministic Büchi and Co-Büchi Automata

  • Yong Li
  • Sven Schewe
  • Qiyi Tang

While recently developed Angluin-style learning algorithms for omega-automata have much in common with her classic DFA learning algorithm, there is a huge difference in the cost of the equivalence queries about the target automata. For omega-regular languages, the target is to learn nondeterministic Büchi automata (NBAs) through the vehicle of Families of DFAs (FDFAs). While the cost of equivalence queries is usually idealised as constant in learning, it makes a practical difference that the language equivalence checking about the learned NBAs is computationally hard. We develop efficient techniques for the cases where we learn deterministic Büchi automata (DBAs) or deterministic co-Büchi automata (DCAs). This is based on the observation that some classes of FDFAs can be used to learn DBAs for DBA recognisable languages, rather than having to resort to nondeterministic ones. We believe that the restriction to DBAs and DCAs in equivalence queries also makes our algorithm more appealing to realistic applications, as the operations are cheap (in NL) for DBAs and DCAs.

ECAI Conference 2024 Conference Paper

Multi-Agent Reinforcement Learning for Alternating-Time Logic

  • Ernst Moritz Hahn
  • Mateo Perez
  • Sven Schewe
  • Fabio Somenzi
  • Ashutosh Trivedi 0001
  • Dominik Wojtczak

Alternating-time temporal logic (ATL) extends branching time logic by enabling quantification over paths that result from the strategic choices made by multiple agents in various coalitions within the system. While classical temporal logics express properties of “closed” systems, ATL can express properties of “open” systems resulting from interactions among several agents. Reinforcement learning (RL) is a sampling-based approach to decision-making where learning agents, guided by a scalar reward function, discover optimal policies through repeated interactions with the environment. The challenge of translating high-level objectives into scalar rewards for RL has garnered increased interest, particularly following the success of model-free RL algorithms. This paper presents an approach for deploying model-free RL to verify multi-agent systems against ATL specifications. The key contribution of this paper is a verification procedure for model-free RL of quantitative and non-nested classic ATL properties, based on Q-learning, demonstrated on a natural subclass of non-nested ATL formulas.

AAAI Conference 2024 Conference Paper

Omega-Regular Decision Processes

  • Ernst Moritz Hahn
  • Mateo Perez
  • Sven Schewe
  • Fabio Somenzi
  • Ashutosh Trivedi
  • Dominik Wojtczak

Regular decision processes (RDPs) are a subclass of non-Markovian decision processes where the transition and reward functions are guarded by some regular property of the past (a lookback). While RDPs enable intuitive and succinct representation of non-Markovian decision processes, their expressive power coincides with finite-state Markov decision processes (MDPs). We introduce omega-regular decision processes (ODPs), where the non-Markovian aspect of the transition and reward functions is extended to an omega-regular lookahead over the system evolution. Semantically, these lookaheads can be considered as promises made by the decision maker or the learning agent about her future behavior. In particular, we assume that, if the promised lookaheads are not met, then the payoff to the decision maker is falsum (the least desirable payoff), overriding any rewards collected by the decision maker. We enable optimization and learning for ODPs under the discounted-reward objective by reducing them to lexicographic optimization and learning over finite MDPs. We present experimental results demonstrating the effectiveness of the proposed reduction.

GandALF Workshop 2023 Workshop Paper

An Objective Improvement Approach to Solving Discounted Payoff Games

  • Daniele Dell'Erba
  • Arthur Dumas
  • Sven Schewe

While discounted payoff games and classic games that reduce to them, like parity and mean-payoff games, are symmetric, their solutions are not. We have taken a fresh view on the constraints that optimal solutions need to satisfy, and devised a novel way to converge to them, which is entirely symmetric. It also challenges the gospel that methods for solving payoff games are either based on strategy improvement or on value iteration.
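For orientation, the classic value-iteration baseline that such discounted payoff game solvers are contrasted with can be sketched in a few lines; this is not the paper's objective-improvement method, and the three-vertex game and discount factor below are hypothetical:

```python
# A toy value-iteration baseline for a discounted payoff game (a sketch of
# the classic approach, not the paper's objective-improvement method).
# vertex -> (owner, [(edge reward, successor), ...]);
# owner 0 maximises the discounted payoff, owner 1 minimises it.
GAME = {
    0: (0, [(1.0, 1), (0.0, 2)]),
    1: (1, [(2.0, 0), (-1.0, 2)]),
    2: (0, [(0.5, 2)]),
}
LAMBDA = 0.9  # discount factor

def value_iteration(game, lam, iters=1000):
    """Iterate the contracting one-step update until it is near its fixpoint."""
    vals = {v: 0.0 for v in game}
    for _ in range(iters):
        vals = {
            v: (max if owner == 0 else min)(
                (1 - lam) * r + lam * vals[u] for r, u in edges
            )
            for v, (owner, edges) in game.items()
        }
    return vals
```

Since the update is a contraction with factor λ, the iterates converge to the unique game values; for the toy game above they settle at 0.45, 0.35, and 0.5 for vertices 0, 1, and 2.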

Highlights Conference 2023 Conference Abstract

Automata for Profit and Pleasure

  • Sven Schewe

What could be greater fun than toying around with formal structures? One particularly beautiful structure to play with are automata over infinite words, and there is really no need to give any supporting argument for the _pleasure_ part in the title. But how about profit? When using ω-regular languages as target languages for practical applications like Markov chain model checking, MDP model checking and reinforcement learning, reactive synthesis, or as a target for an infinite word played out in a two-player game, the classic approach has been to first produce a deterministic automaton D that recognises that language. This deterministic automaton is quite useful: we can essentially play on the syntactic product of the structure and use the acceptance mechanism it inherits from the automaton as target. This is beautiful and moves all the heavy lifting to the required automata transformations. But when we want even more profit in addition to the pleasure, the question arises whether deterministic automata are the best we can do. They are clearly good enough: determinism is as restrictive as it gets, and easily guarantees that one can just work on the product. But what we really want is the reverse: we want an automaton such that we can work on the product, and determinism is just maximally restrictive, and therefore good enough for everything. At Highlights, all will know that we can lift quite a few restrictions and instead turn to the gains we can make when we focus on the real needs of being able to work on the product. For Markov chains, this could be unambiguous automata, for MDPs this could be good-for-MDP automata, and for synthesis and games, it could be good-for-games automata. We will shed light on a few nooks and corners of the vast room of open questions and answers, with a bias towards MDP analysis in general and reinforcement learning in particular.

ECAI Conference 2023 Conference Paper

Omega-Regular Reward Machines

  • Ernst Moritz Hahn
  • Mateo Perez
  • Sven Schewe
  • Fabio Somenzi
  • Ashutosh Trivedi 0001
  • Dominik Wojtczak

Reinforcement learning (RL) is a powerful approach for training agents to perform tasks, but designing an appropriate reward mechanism is critical to its success. However, in many cases, the complexity of the learning objectives goes beyond the capabilities of the Markovian assumption, necessitating a more sophisticated reward mechanism. Reward machines and ω-regular languages are two formalisms used to express non-Markovian rewards for quantitative and qualitative objectives, respectively. This paper introduces ω-regular reward machines, which integrate reward machines with ω-regular languages to enable an expressive and effective reward mechanism for RL. We present a model-free RL algorithm to compute ε-optimal strategies against ω-regular reward machines and evaluate the effectiveness of the proposed algorithm through experiments.

IJCAI Conference 2022 Conference Paper

Hidden 1-Counter Markov Models and How to Learn Them

  • Mehmet Kurucan
  • Mete Özbaltan
  • Sven Schewe
  • Dominik Wojtczak

We introduce hidden 1-counter Markov models (H1MMs) as an attractive sweet spot between standard hidden Markov models (HMMs) and probabilistic context-free grammars (PCFGs). Both HMMs and PCFGs have a variety of applications, e.g., speech recognition, anomaly detection, and bioinformatics. PCFGs are more expressive than HMMs, e.g., they are more suited for studying protein folding or natural language processing. However, they suffer from slow parameter fitting, which is cubic in the observation sequence length. The same process for HMMs is just linear using the well-known forward-backward algorithm. We argue that adding an integer counter to each state of an HMM, e.g., representing the number of clients waiting in a queue, brings its expressivity closer to PCFGs. At the same time, we show that parameter fitting for such a model is computationally inexpensive: it is bi-linear in the length of the observation sequence and the maximal counter value, which grows slower than the observation length. The resulting model of H1MMs allows us to combine the best of both worlds: more expressivity with faster parameter fitting.
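The linear-time baseline the abstract refers to is the forward pass of the standard forward-backward algorithm for plain HMMs; a minimal sketch, with a made-up 2-state, 2-symbol model:

```python
import numpy as np

# Forward pass for an ordinary HMM: the likelihood of an observation
# sequence in time linear in its length. The model below is a toy,
# not one from the paper.
pi = np.array([0.6, 0.4])      # initial state distribution
A = np.array([[0.7, 0.3],      # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission probabilities: state x symbol
              [0.2, 0.8]])

def forward_likelihood(obs):
    """P(obs): one O(states^2) update per observation."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())
```

Summing the likelihood over all observation sequences of a fixed length gives 1, which is a quick sanity check on the recursion.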

NeurIPS Conference 2022 Conference Paper

Recursive Reinforcement Learning

  • Ernst Moritz Hahn
  • Mateo Perez
  • Sven Schewe
  • Fabio Somenzi
  • Ashutosh Trivedi
  • Dominik Wojtczak

Recursion is the fundamental paradigm to finitely describe potentially infinite objects. As state-of-the-art reinforcement learning (RL) algorithms cannot directly reason about recursion, they must rely on the practitioner's ingenuity in designing a suitable "flat" representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hampers scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points that correspond to input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with call-stack playing the role of the pushdown stack), and can model probabilistic programs with recursive procedural calls. We introduce Recursive Q-learning---a model-free RL algorithm for RMDPs---and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions.

TMLR Journal 2022 Journal Article

Weight Expansion: A New Perspective on Dropout and Generalization

  • Gaojie Jin
  • Xinping Yi
  • Pengfei Yang
  • Lijun Zhang
  • Sven Schewe
  • Xiaowei Huang

While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking. We introduce the concept of weight expansion, an increase in the signed volume of a parallelotope spanned by the column or row vectors of the weight covariance matrix, and show that weight expansion is an effective means of increasing the generalization in a PAC-Bayesian setting. We provide a theoretical argument that dropout leads to weight expansion and extensive empirical support for the correlation between dropout and weight expansion. To support our hypothesis that weight expansion can be regarded as an indicator of the enhanced generalization capability endowed by dropout, and not just as a mere by-product, we have studied other methods that achieve weight expansion (resp. contraction), and found that they generally lead to an increased (resp. decreased) generalization ability. This suggests that dropout is an attractive regularizer, because it is a computationally cheap method for obtaining weight expansion. This insight justifies the role of dropout as a regularizer, while paving the way for identifying regularizers that promise improved generalization through weight expansion.
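Read concretely, the signed volume of the parallelotope spanned by the rows of the weight covariance matrix is its determinant, so "expansion" means the (log-)determinant grows; a small sketch with illustrative covariance matrices, not values from the paper:

```python
import numpy as np

# Weight expansion as growth of the determinant of the weight covariance
# matrix. The two 3x3 covariance matrices below are illustrative only.
def log_volume(cov):
    sign, logdet = np.linalg.slogdet(cov)
    return sign, logdet

before = np.array([[1.0, 0.8, 0.8],   # strongly correlated weights:
                   [0.8, 1.0, 0.8],   # det = (1 - 0.8)^2 * (1 + 1.6) = 0.104
                   [0.8, 0.8, 1.0]])
after = np.eye(3)                     # decorrelated weights: det = 1
```

Decorrelating the weights expands the volume here from 0.104 to 1, matching the intuition that expansion and decorrelation go together.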

NeurIPS Conference 2020 Conference Paper

How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks?

  • Gaojie Jin
  • Xinping Yi
  • Liang Zhang
  • Lijun Zhang
  • Sven Schewe
  • Xiaowei Huang

This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability. For fully-connected layers, the weight correlation is defined as the average cosine similarity between weight vectors of neurons, and for convolutional layers, the weight correlation is defined as the cosine similarity between filter matrices. Theoretically, we show that weight correlation can, and should, be incorporated into the PAC-Bayesian framework for the generalisation of neural networks, and the resulting generalisation bound is monotonic with respect to the weight correlation. We formulate a new complexity measure, which lifts the PAC-Bayes measure with weight correlation, and experimentally confirm that it is able to rank the generalisation errors of a set of networks more precisely than existing measures. More importantly, we develop a new regulariser for training, and provide extensive experiments that show that the generalisation error can be greatly reduced with our novel approach.
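The fully-connected-layer definition quoted above (average cosine similarity between the neurons' weight vectors) is easy to compute directly; the weight matrix below is an illustrative toy, one row per neuron:

```python
import numpy as np

# Weight correlation of a fully-connected layer as defined in the
# abstract: the average pairwise cosine similarity between the weight
# vectors of its neurons (one row of W per neuron).
def weight_correlation(W):
    unit = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = unit @ unit.T                       # pairwise cosine similarities
    n = W.shape[0]
    return (cos.sum() - n) / (n * (n - 1))    # average over distinct pairs

W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
```

For this W the three distinct pairs have similarities 0, 1/√2, and 1/√2, so the weight correlation is √2/3 ≈ 0.47.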

MFCS Conference 2018 Conference Paper

Maximum Rooted Connected Expansion

  • Ioannis Lamprou 0001
  • Russell A. Martin
  • Sven Schewe
  • Ioannis Sigalas
  • Vassilis Zissimopoulos

Prefetching constitutes a valuable tool toward the goal of efficient Web surfing. As a result, estimating the amount of resources that need to be preloaded during a surfer's browsing becomes an important task. In this regard, prefetching can be modeled as a two-player combinatorial game [Fomin et al., Theoretical Computer Science 2014], where a surfer and a marker alternately play on a given graph (representing the Web graph). During its turn, the marker chooses a set of k nodes to mark (prefetch), whereas the surfer, represented as a token resting on graph nodes, moves to a neighboring node (Web resource). The surfer's objective is to reach an unmarked node before all nodes become marked and the marker wins. Intuitively, since the surfer is step-by-step traversing a subset of nodes in the Web graph, a satisfactory prefetching procedure would load in cache (without any delay) all resources lying in the neighborhood of this growing subset. Motivated by the above, we consider the following maximization problem, which we refer to as the Maximum Rooted Connected Expansion (MRCE) problem. Given a graph G and a root node v_0, we wish to find a subset of vertices S such that S is connected, S contains v_0 and the ratio |N[S]|/|S| is maximized, where N[S] denotes the closed neighborhood of S, that is, N[S] contains all nodes in S and all nodes with at least one neighbor in S. We prove that the problem is NP-hard even when the input graph G is restricted to be a split graph. On the positive side, we demonstrate a polynomial time approximation scheme for split graphs. Furthermore, we present a 1/6(1-1/e)-approximation algorithm for general graphs based on techniques for the Budgeted Connected Domination problem [Khuller et al., SODA 2014]. Finally, we provide a polynomial-time algorithm for the special case of interval graphs. Our algorithm returns an optimal solution for MRCE in O(n^3) time, where n is the number of nodes in G.
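The objective |N[S]|/|S| can be made concrete by brute force on a tiny hypothetical graph (exponential enumeration, for illustration only; the paper's contribution is the hardness and approximation results above):

```python
from itertools import combinations

# Brute-force evaluation of the MRCE objective |N[S]| / |S| on a small
# made-up path-plus-star graph rooted at vertex 0.
ADJ = {0: {1}, 1: {0, 2}, 2: {1, 3, 4, 5}, 3: {2}, 4: {2}, 5: {2}}
ROOT = 0

def closed_neighbourhood(S):
    # N[S]: the nodes of S together with all their neighbours
    return set(S) | {u for v in S for u in ADJ[v]}

def is_connected(S):
    # depth-first search within S, starting from the root (always in S)
    seen, stack = {ROOT}, [ROOT]
    while stack:
        for u in ADJ[stack.pop()] & S:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen == S

def best_ratio():
    others = [v for v in ADJ if v != ROOT]
    best = 0.0
    for k in range(len(others) + 1):
        for extra in combinations(others, k):
            S = {ROOT, *extra}
            if is_connected(S):
                best = max(best, len(closed_neighbourhood(S)) / len(S))
    return best
```

On this graph both S = {0} and S = {0, 1, 2} achieve the optimal ratio of 2.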

CSL Conference 2018 Conference Paper

Parity Games with Weights

  • Sven Schewe
  • Alexander Weinert
  • Martin Zimmermann 0002

Quantitative extensions of parity games have recently attracted significant interest. These extensions include parity games with energy and payoff conditions as well as finitary parity games and their generalization to parity games with costs. Finitary parity games enjoy a special status among these extensions, as they offer a native combination of the qualitative and quantitative aspects in infinite games: the quantitative aspect of finitary parity games is a quality measure for the qualitative aspect, as it measures the limit superior of the time it takes to answer an odd color by a larger even one. Finitary parity games have been extended to parity games with costs, where each transition is labelled with a non-negative weight that reflects the costs incurred by taking it. We lift this restriction and consider parity games with costs with arbitrary integer weights. We show that solving such games is in NP ∩ co-NP, the signature complexity for games of this type. We also show that the protagonist has finite-state winning strategies, and provide tight exponential bounds for the memory he needs to win the game. Naturally, the antagonist may need infinite memory to win. Finally, we present tight bounds on the quality of winning strategies for the protagonist.

Highlights Conference 2017 Conference Abstract

Synthesising Strategy Improvement and Recursive Algorithms for Solving 2.5 Player Parity Games

  • Sven Schewe

2.5 player parity games combine the challenges posed by 2.5 player reachability games and the qualitative analysis of parity games. These two types of problems are best approached with different types of algorithms: strategy improvement algorithms for 2.5 player reachability games and recursive algorithms for the qualitative analysis of parity games. We present a method that—in contrast to existing techniques—tackles both aspects with the best suited approach and works exclusively on the 2.5 player game itself. The resulting technique is powerful enough to handle games with millions of states.

Highlights Conference 2016 Conference Abstract

A Simple Algorithm for Solving Qualitative Probabilistic Parity Games

  • Sven Schewe

In this talk, we develop an approach to find strategies that guarantee a property in systems that contain controllable, uncontrollable, and random vertices, resulting in probabilistic games. Such games are a reasonable abstraction of systems that comprise partial control over the system (reflected by controllable transitions), hostile nondeterminism (abstraction of the unknown, such as the behaviour of an attacker or a potentially hostile environment), and probabilistic transitions for the abstraction of unknown behaviour neutral to our goals. We exploit a simple and only mildly adjusted algorithm from the analysis of nonprobabilistic systems, and use it to show that the qualitative analysis of probabilistic games inherits the much celebrated sub-exponential complexity from 2-player games. The simple structure of the exploited algorithm allows us to offer tool support for finding the desired strategy, if it exists, for the given systems and properties. Our experimental evaluation shows that our technique is powerful enough to construct simple strategies that guarantee the specified probabilistic temporal properties. The talk is based on the CAV 2016 paper with the same title. It is joint work with Ernst Moritz Hahn, Andrea Turrini, and Lijun Zhang. The amazing thing about this approach is its simplicity and beauty. Instead of having a gadget-based reduction to involved techniques, we use a straightforward adaptation of a simple technique — and it turns out that 2.5 player games are not that hard to solve.

Highlights Conference 2016 Conference Abstract

Complementing Semi-Deterministic Büchi Automata

  • Matthias Heizmann
  • Sven Schewe
  • Jan Strejček
  • Ming-Hsien Tsai

We introduce an efficient complementation technique for semi-deterministic Büchi automata, which are Büchi automata that are deterministic in the limit: from every accepting state onward, their behaviour is deterministic. It is interesting to study semi-deterministic automata, because they play a role in practical applications of automata theory, such as the analysis of Markov decision processes. Our motivation to study their complementation comes from the termination analysis implemented in Ultimate Büchi Automizer, where these automata represent checked runs and have to be complemented to identify runs to be checked. We show that semi-determinism leads to a simpler complementation procedure. Our complementation is based on an extended breakpoint construction that allows for symbolic implementation. It also leads to significantly improved bounds, as the complement of a semi-deterministic automaton with n states has less than 4^n states. Moreover, the resulting automaton is unambiguous, which again offers new applications, like the analysis of Markov chains. We also motivate and present an on-the-fly modification of our complementation, which does not need to know the whole automaton before the complementation starts; this is a crucial property for Ultimate Büchi Automizer, which uses implicitly encoded automata. The price for the on-the-fly approach is a slightly worse upper bound on the size of the produced automaton for the complement: it has less than 5^n states. We have evaluated our construction against the semi-deterministic automata produced by Ultimate Büchi Automizer applied to programs of the Termination category of the software verification competition SV-COMP 2015. We complemented these automata by our algorithm and by four standard complementation constructions and compared the resulting automata. The evaluation confirms that our algorithm outperforms the known complementation techniques that work for general nondeterministic Büchi automata.
The presentation is based on the identically titled paper published at TACAS 2016.

TIME Conference 2016 Conference Paper

Optimal Control for Simple Linear Hybrid Systems

  • Mahmoud A. A. Mousa
  • Sven Schewe
  • Dominik Wojtczak

This paper studies optimal time-bounded control in a simple subclass of linear hybrid systems, which consists of one continuous variable and global constraints. Each state has a continuous cost attached to it, which is linear in the sojourn time, while a discrete cost is attached to each transition taken. We show the corresponding decision problem to be NP-complete and develop an FPTAS for finding an approximate solution. We have implemented a small prototype to compare the performance of these approximate and precise algorithms for this problem. Our results indicate that the proposed approximation schemes scale. Furthermore, we show that the same problem with infinite time horizon is in LOGSPACE.

GandALF Workshop 2015 Workshop Paper

Making the Best of Limited Memory in Multi-Player Discounted Sum Games

  • Anshul Gupta
  • Sven Schewe
  • Dominik Wojtczak

In this paper, we establish the existence of optimal bounded memory strategy profiles in multi-player discounted sum games. We introduce a non-deterministic approach to compute optimal strategy profiles with bounded memory. Our approach can be used to obtain optimal rewards in a setting where a powerful player selects the strategies of all players for Nash and leader equilibria, where in leader equilibria the Nash condition is waived for the strategy of this powerful player. The resulting strategy profiles are optimal for this player among all strategy profiles that respect the given memory bound, and the related decision problem is NP-complete. We also provide simple examples, which show that having more memory will improve the optimal strategy profile, and that sufficient memory to obtain optimal strategy profiles cannot be inferred from the structure of the game.
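Since a bounded-memory strategy profile induces an ultimately periodic ("lasso") play, its discounted sum payoff has a simple closed form, which a short sketch can illustrate; the rewards and discount factor below are made up:

```python
# Discounted sum payoff of a lasso-shaped play: sum_i lam**i * r_i over
# the prefix rewards, plus a geometric series for the repeated loop.
# Illustrative values only, not from the paper.
def discounted_sum(prefix, loop, lam):
    head = sum(lam**i * r for i, r in enumerate(prefix))
    loop_once = sum(lam**i * r for i, r in enumerate(loop))
    # a loop of length L repeats with discount weight lam**L per round
    tail = loop_once / (1 - lam**len(loop))
    return head + lam**len(prefix) * tail
```

For example, reward 1 followed by reward 2 forever, at discount 0.5, is worth 1 + 0.5·2/(1-0.5) = 3.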

TIME Conference 2014 Conference Paper

Quantitative Verification in Rational Environments

  • Anshul Gupta
  • Sven Schewe

We study optimal equilibria in turn-based multi-player mean-payoff games. Nash equilibria are a standard way to define rational behaviour of different players in multi-player games. These equilibria treat all players equally. We study settings where a leader has additional power over the game: she has the power to assign strategies to all participating players, including herself. We argue that a leader who assigns the strategies may not want to comply with the common restrictions imposed by Nash equilibria. This setting provides the basis for the quantitative analysis of distributed systems, where the leader can take the role of a controller or an adversary, while the other players form a rational environment. We show that the leader always has an optimal strategy in this setting, and that no Nash equilibrium can be superior to it. Finding this equilibrium is NP-complete and, for a fixed number of players, there is a polynomial time reduction to solving two-player mean-payoff games.

MFCS Conference 2013 Conference Paper

Unlimited Decidability of Distributed Synthesis with Limited Missing Knowledge

  • Anca Muscholl
  • Sven Schewe

We study the problem of controller synthesis for distributed systems modelled by Zielonka automata. While the decidability of this problem in full generality remains open and challenging, we suggest here to seek controllers from a parametrised family: we are interested in controllers that ensure frequent communication between the processes, where frequency is determined by the parameter. We show that this restricted controller synthesis version is affordable by synthesis standards: fixing the parameter, the problem is EXPTIME-complete.

CSL Conference 2012 Conference Paper

Bounded Satisfiability for PCTL

  • Nathalie Bertrand 0001
  • John Fearnley
  • Sven Schewe

While model checking PCTL for Markov chains is decidable in polynomial time, the decidability of PCTL satisfiability is a long-standing open problem. While general satisfiability is an intriguing challenge from a purely theoretical point of view, we argue that general solutions would not be of interest to practitioners: such solutions could be too big to be implementable or even infinite. Inspired by bounded synthesis techniques, we turn to the more applied problem of seeking models of a bounded size: we restrict our search to implementable - and therefore reasonably simple - models. We propose a procedure to decide whether or not a given PCTL formula has an implementable model by reducing it to an SMT problem. We have implemented our techniques and found that they can be applied to the practical problem of sanity checking - a procedure that allows a system designer to check whether their formula has an unexpectedly small model.

GandALF Workshop 2012 Workshop Paper

Rapid Recovery for Systems with Scarce Faults

  • Chung-Hao Huang
  • Doron Peled
  • Sven Schewe
  • Farn Wang

Our goal is to achieve a high degree of fault tolerance through the control of safety-critical systems. This reduces to solving a game between a malicious environment that injects failures and a controller who tries to establish a correct behavior. We suggest a new control objective for such systems that offers a better balance between complexity and precision: we seek systems that are k-resilient. In order to be k-resilient, a system needs to be able to rapidly recover from a small number, up to k, of local faults infinitely many times, provided that blocks of up to k faults are separated by short recovery periods in which no fault occurs. k-resilience is a simple but powerful abstraction from the precise distribution of local faults, but much more refined than the traditional objective to maximize the number of tolerated local faults. We argue why we believe this to be the right level of abstraction for safety-critical systems when local faults are few and far between. We show that the computational complexity of constructing optimal control with respect to resilience is low and demonstrate the feasibility through an implementation and experimental results.

TIME Conference 2011 Conference Paper

Synthesising Classic and Interval Temporal Logic

  • Sven Schewe
  • Cong Tian

Linear-Time Temporal Logic (LTL) is one of the most influential logics for the specification and verification of reactive systems. An important selling point of LTL is its striking simplicity, which might be a reason why none of the many extensions suggested for LTL have gained the same influence. Interval-based temporal logics like Interval Temporal Logic (ITL) are a more recent branch of temporal logics with their own niche of interesting applications. At first glance, interval-based temporal logics bear little resemblance to LTL, and the spread of these logics beyond their niche is hampered by a seeming structural incompatibility with LTL. When competing for adoption on a larger scale, interval-based temporal logics would fight a losing battle against a more established competitor with better complexity and mature tools. In this paper, we suggest extending ITL to Pop Logic (PL) by introducing a simple pop operator that revokes the binding of the chop operation, very much like the popping operation on a stack, and show that LTL can be viewed as a syntactic subset of PL. This is a surprising twist: by strengthening the comparably exotic logic ITL slightly, and by using the new pop and the old chop operator as primitive constructs, we obtain a logic for which LTL is a de-facto syntactic fragment. The power of this extension is that, by subsuming both interval and classic temporal logics, it synthesises both concepts into a common framework. The charm of this extension is that PL does not sacrifice the simplicity that makes its sub-logics attractive.

CSL Conference 2010 Conference Paper

Coordination Logic

  • Bernd Finkbeiner
  • Sven Schewe

We introduce Coordination Logic (CL), a new temporal logic that reasons about the interplay between behavior and informedness in distributed systems. CL provides a logical representation for the distributed realizability problem and extends the game-based temporal logics, including the alternating-time temporal logics, strategy logic, and game logic, with quantification over strategies under incomplete information. We show that the structure in CL that results from the nesting of the quantifiers is sufficient to guarantee the decidability of the logic and at the same time general enough to subsume and extend all previously known decidable cases of the distributed realizability problem.

MFCS Conference 2009 Conference Paper

From Parity and Payoff Games to Linear Programming

  • Sven Schewe

This paper establishes a surprising reduction from parity and mean payoff games to linear programming problems. While such a connection is trivial for solitary games, it is surprising for two-player games, because the players have opposing objectives, whose natural translations into an optimisation problem are minimisation and maximisation, respectively. Our reduction to linear programming circumvents the need for concurrent minimisation and maximisation by replacing one of them, the maximisation, by approximation. The resulting optimisation problem can be translated to a linear programme by a simple space transformation, which is inexpensive in the unit cost model, but results in an exponential growth of the coefficients. The discovered connection opens up unexpected applications of linear programming in the unit cost model, like μ-calculus model checking, and thus turns the intriguing academic problem of finding a polynomial-time algorithm for linear programming in this model of computation (and subsequently a strongly polynomial algorithm) into a problem of paramount practical importance: all advancements in this area can immediately be applied to accelerate solving parity and payoff games, or to improve their complexity analysis.
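The solitary case that the abstract calls trivial boils down, for mean payoff games, to the maximum mean cycle problem. The sketch below solves that problem directly with Karp's algorithm rather than via linear programming; it is a minimal illustration, assuming vertex 0 can reach every vertex of the game graph, and the function name is hypothetical.

```python
def max_mean_cycle(n, edges):
    """Karp's algorithm for the maximum mean cycle of a directed graph.
    n: number of vertices (0..n-1), with every vertex reachable from 0.
    edges: list of (u, v, weight) triples.
    Returns the maximum, over all cycles, of the average edge weight."""
    NEG = float("-inf")
    # D[k][v] = maximum weight of a walk with exactly k edges from 0 to v
    D = [[NEG] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] != NEG:
                D[k][v] = max(D[k][v], D[k - 1][u] + w)
    # Karp's theorem: the optimum is max over v of the minimum ratio
    # (D[n][v] - D[k][v]) / (n - k) over all k < n with a finite D[k][v].
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        best = max(best, min((D[n][v] - D[k][v]) / (n - k)
                             for k in range(n) if D[k][v] != NEG))
    return best
```

On a two-vertex graph with a cycle 0 -> 1 -> 0 of weights 2 and 4 (mean 3) and a self-loop on 0 of weight 1 (mean 1), the maximum mean cycle value is 3. A maximising memoryless strategy in a one-player mean payoff game simply reaches such a cycle and stays on it.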

CSL Conference 2008 Conference Paper

An Optimal Strategy Improvement Algorithm for Solving Parity and Payoff Games

  • Sven Schewe

This paper presents a novel strategy improvement algorithm for parity and payoff games, which is guaranteed to select, in each improvement step, an optimal combination of local strategy modifications. Current strategy improvement methods stepwise improve the strategy of one player with respect to some ranking function, using an algorithm with two distinct phases: they first choose a modification to the strategy of one player from a list of locally profitable changes, and subsequently evaluate the modified strategy. This separation is unfortunate, because current strategy improvement algorithms have no effective means to predict the global effect of the individual local modifications beyond classifying them as profitable, adversarial, or stale. Furthermore, they are completely blind to the cross effects of different modifications: applying one profitable modification may render all other profitable modifications adversarial. Our new construction overcomes the traditional separation between choosing and evaluating the modification to the strategy. It thus improves over current strategy improvement algorithms by providing the optimal improvement in every step, selecting the best combination of local updates from a superset of all profitable and stale changes.

CSL Conference 2006 Conference Paper

Satisfiability and Finite Model Property for the Alternating-Time μ-Calculus

  • Sven Schewe
  • Bernd Finkbeiner

This paper presents a decision procedure for the alternating-time μ-calculus. The algorithm is based on a representation of alternating-time formulas as automata over concurrent game structures. We show that language emptiness of these automata can be checked in exponential time. The complexity of our construction meets the known lower bounds for deciding the satisfiability of the classic μ-calculus. It follows that the satisfiability problem is EXPTIME-complete for the alternating-time μ-calculus.

LOPSTR Conference 2006 Conference Paper

Synthesis of Asynchronous Systems

  • Sven Schewe
  • Bernd Finkbeiner

This paper addresses the problem of synthesizing an asynchronous system from a temporal specification. We show that the cost of synthesizing a single-process implementation is the same for synchronous and asynchronous systems (2EXPTIME-complete for CTL* and EXPTIME-complete for the μ-calculus) if we assume a full scheduler (i.e., a scheduler that allows every possible scheduling), and exponentially more expensive for asynchronous systems without this assumption (3EXPTIME-complete for CTL* and 2EXPTIME-complete for the μ-calculus). While multi-process synthesis for synchronous distributed systems is possible for certain architectures (like pipelines and rings), we show that the synthesis of asynchronous distributed systems is decidable if and only if at most one process implementation is unknown.