AIJ Journal 2026 Journal Article
Redefining ABA+ Semantics via Abstract Set-To-Set Attacks
- Yannis Dimopoulos
- Wolfgang Dvořák
- Anna Rapberger
- Matthias König
- Markus Ulbricht
- Stefan Woltran
AIJ Journal 2026 Journal Article
JAIR Journal 2026 Journal Article
Fragments of propositional logic, i.e., tailored sub-languages designed for neatly structured data, are relevant in many practical settings. This paper studies belief update in fragments (e.g., Horn, Krom, affine) that obey a desirable semantic closure condition. We assume update is guided by the well-known Katsuno-Mendelzon (KM) postulates, which in full propositional logic characterize update operators as choice functions guided by total or partial preorders over possible worlds. Because many useful fragments cannot express every connective (e.g., they often lack closure under disjunction), the KM axioms must be rephrased and supplemented to keep updates rational in these less expressive environments. Our main result is a set of representation theorems: once the KM postulates are adjusted, they capture exactly the update operators generated by suitably constrained total or partial preorders within the fragment. In addition, we clarify how revision works in fragments when partial preorders are allowed and also present concrete, fragment-friendly update operators.
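The semantic closure condition the abstract alludes to can be illustrated with the classic closure property of the Horn fragment: the intersection of two models of a Horn theory is again a model. The sketch below shows only this textbook instance, under my own toy encoding of clauses; the paper's exact condition may be more general.

```python
# Toy illustration: Horn theories are closed under intersection of models.
# A clause is (negative_atoms, positive_atoms); Horn = at most one positive atom.

def satisfies(model, clause):
    """A clause holds if some negative atom is false or some positive atom is true."""
    neg, pos = clause
    return any(a not in model for a in neg) or any(a in model for a in pos)

def is_model(model, theory):
    return all(satisfies(model, c) for c in theory)

# Horn theory: the fact p, and the rule p -> q.
theory = [
    (frozenset(), frozenset({"p"})),       # p
    (frozenset({"p"}), frozenset({"q"})),  # p -> q
]

m1 = frozenset({"p", "q", "r"})
m2 = frozenset({"p", "q", "s"})
assert is_model(m1, theory) and is_model(m2, theory)
# The intersection of two Horn models is again a model:
assert is_model(m1 & m2, theory)
```

This closure under intersection is what makes fragments like Horn well-behaved for the representation theorems the paper develops.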
ECAI Conference 2025 Conference Paper
In the study of logic programming, notions of equivalence play a significant role. This is because, under common nonmonotonic semantics like answer-set programming, two programs sharing the same models (answer sets) are not necessarily equivalent in all contexts. Whether this context concerns other program modules or just different data distinguishes strong from uniform equivalence. We introduce a new notion of equivalence for logic programs under the answer-set semantics that allows us to precisely compare and simplify programs that receive input from different sources (i.e., over different alphabets); a setting that previous equivalence notions have not considered, but which has some interesting use cases, like data integration or belief merging. Our notion further generalizes relativized equivalence, where equivalence is only required over a parameterized context, and has the core concepts of strong and uniform equivalence as corner cases. We provide a model-theoretic characterization in the spirit of SE-models and establish some theoretical properties including a thorough complexity analysis. Furthermore, using our notion, we can pinpoint the known complexity gap between strong and uniform equivalence, giving insight into why the latter is harder than the former.
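The phenomenon the abstract starts from, that equal answer sets do not imply equivalence in all contexts, can be reproduced with a brute-force answer-set computation over a two-atom vocabulary. This is a minimal sketch under my own rule encoding, not the paper's machinery:

```python
from itertools import combinations

# Rules are triples (head, positive_body, negative_body) of atom names.

def reduct(program, interp):
    """Gelfond-Lifschitz reduct: drop rules blocked by interp, strip negation."""
    return [(h, pos) for (h, pos, neg) in program if not set(neg) & interp]

def least_model(positive_program):
    """Least model of a negation-free program via naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_program:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def answer_sets(program, atoms):
    """Brute force: an interpretation is an answer set iff it equals the
    least model of its own reduct."""
    return [frozenset(i)
            for r in range(len(atoms) + 1)
            for combo in combinations(sorted(atoms), r)
            for i in [set(combo)]
            if least_model(reduct(program, i)) == i]

P1 = [("a", (), ("b",))]   # a :- not b.
P2 = [("a", (), ())]       # a.
atoms = {"a", "b"}
# Same answer sets in isolation ...
assert answer_sets(P1, atoms) == answer_sets(P2, atoms) == [frozenset({"a"})]
# ... but not in every context: adding the fact b separates them.
ctx = [("b", (), ())]
assert answer_sets(P1 + ctx, atoms) == [frozenset({"b"})]
assert answer_sets(P2 + ctx, atoms) == [frozenset({"a", "b"})]
```

The context program acting over a different alphabet than the compared programs is exactly the setting the new equivalence notion is built for.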
KR Conference 2025 Conference Paper
The grounding bottleneck in Answer Set Programming prohibits large instances from being solved. This is caused by a combinatorial explosion in the grounding phase of standard ground&solve systems. A promising alternative is Body-Decoupled Grounding (BDG), which grounds each body predicate on its own. However, BDG faces challenges in terms of worst-case grounding size and limited interoperability with other systems. This paper addresses shortcomings of BDG by introducing FastFound: an alternative foundedness check that significantly reduces grounding sizes, by grounding each predicate on its own. FastFound’s foundedness check is done implicitly, which leads to a quadratic reduction in grounding size. We start by introducing FastFound for tight normal rules, where we observe that this cannot be substantially improved. Then we extend FastFound to head-cycle-free programs and give novel interoperability results for full disjunctive programs. An experimental evaluation on our prototype shows promising results, as we solve more grounding-heavy tasks than both standard ground&solve systems and BDG.
NMR Workshop 2025 Conference Paper
Strong equivalence between knowledge bases ensures the possibility of replacing one with the other without affecting reasoning outcomes, in any given context. This makes it a crucial property in nonmonotonic formalisms. In particular, the fields of logic programming and abstract argumentation provide primary examples in which this property has been subject to vast investigations. However, while (classes of) logic programs and abstract argumentation frameworks are known to be semantically equivalent in static settings, this alignment breaks in dynamic contexts due to differing notions of update. As a result, strong equivalence does not always carry over from one formalism to the other. In this paper, we carefully investigate this discrepancy and introduce a new notion of strong equivalence for logic programs. Our approach preserves strong equivalence under translation between certain classes of logic programs and both Dung-style and claim-augmented argumentation frameworks, thus restoring compatibility across these formalisms.
AAAI Conference 2024 Conference Paper
Answer Set Programming (ASP) is a prominent rule-based language for knowledge representation and reasoning with roots in logic programming and non-monotonic reasoning. The aim to capture the essence of removing (ir)relevant details in ASP programs led to the investigation of different notions, from strong persistence (SP) forgetting, to faithful abstractions, and, recently, strong simplifications, where the latter two can be seen as relaxed and strengthened notions of forgetting, respectively. Although it was observed that these notions are related, especially given that they have characterizations through the semantics for strong equivalence, it remained unclear whether they can be brought together. In this work, we bridge this gap by introducing a novel relativized equivalence notion, which is a relaxation of the recent simplification notion, that is able to capture all related notions from the literature. We provide the necessary and sufficient conditions for relativized simplifiability, which shows that the challenging part is for when the context programs do not contain all the atoms to remove. We then introduce an operator that combines projection and a relaxation of SP-forgetting to obtain the relativized simplifications. We furthermore provide complexity results that complete the overall picture.
IJCAI Conference 2024 Conference Paper
Answer Set Programming (ASP) is a key paradigm for problems in artificial intelligence and industrial contexts. In ASP, problems are modeled via a set of rules. Over time, this paradigm has grown into a rich language, enabling complex rule types like aggregate expressions. Most practical ASP systems follow a ground-and-solve pattern, where rule schemes are grounded and the resulting rules are solved. There, the so-called grounding bottleneck may prevent solving altogether due to sheer grounding sizes. Recently, body-decoupled grounding (BDG) demonstrated how to reduce grounding sizes by delegating effort to solving. However, BDG provides limited interoperability with traditional grounders and only covers simple rule types. In this work, we establish hybrid grounding, based on a novel splitting theorem that allows us to freely combine BDG with traditional grounders. To mitigate huge groundings in practice, we define rewriting procedures for efficiently deferring grounding effort of aggregates to solving. Our experimental results indicate that this approach is competitive, especially for instances where traditional grounding fails.
AAMAS Conference 2024 Conference Paper
Online discussion platforms are a vital part of the public discourse in a deliberative democracy. However, how to interpret the outcomes of the discussions on these platforms is often unclear. In this paper, we propose a novel and explainable method for selecting a set of most representative, consistent points of view by combining methods from computational social choice and abstract argumentation. Specifically, we model online discussions as abstract argumentation frameworks combined with information regarding which arguments voters approve of. Based on ideas from approval-based multiwinner voting, we introduce several voting rules for selecting a set of preferred extensions that represents voters’ points of view. We compare the proposed methods across several dimensions, theoretically and in numerical simulations, and give clear suggestions on which methods to use depending on the specific situation.
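The combination described above, extensions of an argumentation framework scored against voters' approved arguments, can be sketched with a simple utilitarian rule. The rule below (total approvals covered) is just one plausible instance of the approval-based ideas the paper compares, not its specific proposal:

```python
# Hypothetical example: candidate extensions (from some AF semantics) and
# each voter's set of approved arguments; rank extensions by total coverage.

extensions = [frozenset({"a", "b"}), frozenset({"a", "c"}), frozenset({"d"})]
voters = [{"a"}, {"a", "b"}, {"b"}]   # approved arguments per voter

def approval_score(ext):
    """Utilitarian score: total approved arguments the extension covers."""
    return sum(len(ext & approved) for approved in voters)

ranked = sorted(extensions, key=approval_score, reverse=True)
# {a, b} scores 4 (voter 1: a; voter 2: a, b; voter 3: b) and wins.
assert ranked[0] == frozenset({"a", "b"})
```

Selecting a whole set of extensions rather than a single winner, as in multiwinner voting, adds the representativeness dimension the paper studies.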
IJCAI Conference 2024 Conference Paper
Answer Set Programming (ASP) is a prominent problem-modeling and solving framework, whose solutions are called answer sets. Epistemic logic programs (ELP) extend ASP to reason about all or some answer sets. Solutions to an ELP can be seen as consequences over multiple collections of answer sets, known as world views. While the complexity of propositional programs is well studied, the non-ground case remains open. This paper establishes the complexity of non-ground ELPs. We provide a comprehensive picture for well-known program fragments, which turns out to be complete for the class NEXPTIME with access to oracles up to Σ₂P. In the quantitative setting, we establish complexity results for counting complexity beyond #EXP. To mitigate high complexity, we establish results in case of bounded predicate arity, reaching up to the fourth level of the polynomial hierarchy. Finally, we provide ETH-tight runtime results for the parameter treewidth, which has applications in quantitative reasoning, where we reason on (marginal) probabilities of epistemic literals.
JAIR Journal 2024 Journal Article
Argumentation frameworks (AFs) are a key formalism in AI research. Their semantics have been investigated in terms of principles, which define characteristic properties in order to deliver guidance for analyzing established and developing new semantics. Because of the simple structure of AFs, many desired properties hold almost trivially, at the same time hiding interesting concepts behind syntactic notions. We extend the principle-based approach to argumentation frameworks with collective attacks (SETAFs) and provide a comprehensive overview of common principles for their semantics. Our analysis shows that investigating principles based on decomposing the given SETAF (e.g. directionality or SCC-recursiveness) poses additional challenges in comparison to usual AFs. We introduce the notion of the reduct as well as the modularization principle for SETAFs which will prove beneficial for this kind of investigation. We then demonstrate how our findings can be utilized for incremental computation of extensions and show how we can use graph properties of the frameworks to speed up these algorithms.
AAAI Conference 2024 Conference Paper
Assumption-based argumentation (ABA) is a powerful defeasible reasoning formalism which is based on the interplay of assumptions, their contraries, and inference rules. ABA with preferences (ABA+) generalizes the basic model by allowing qualitative comparison between assumptions. The integration of preferences however comes with a cost. In ABA+, the evaluation under two central and well-established semantics, grounded and complete semantics, is not guaranteed to yield an outcome. Moreover, while ABA frameworks without preferences allow for a graph-based representation in Dung-style frameworks, a corresponding instantiation for general ABA+ frameworks has not been established so far. In this work, we tackle both issues: First, we develop a novel abstract argumentation formalism based on set-to-set attacks. We show that our so-called Hyper Argumentation Frameworks (HYPAFs) capture ABA+. Second, we propose relaxed variants of complete and grounded semantics for HYPAFs that yield an extension for all frameworks by design, while still faithfully generalizing the established semantics of Dung-style Argumentation Frameworks. We exploit the newly established correspondence between ABA+ and HYPAFs to obtain variants for grounded and complete ABA+ semantics that are guaranteed to yield an outcome. Finally, we discuss basic properties and provide a complexity analysis. Along the way, we settle the computational complexity of several ABA+ semantics.
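A set-to-set attack structure of the kind HYPAFs provide can be represented as pairs of argument sets. The conflict-freeness condition below (an attack fires only when both the full attacking set and the full attacked set are included) is one plausible reading for illustration, not the paper's definition:

```python
# Minimal data-structure sketch for set-to-set attacks (HYPAF-style).
# Assumption: a set S is conflict-free if no attack (A, B) has A ⊆ S and B ⊆ S.

from itertools import combinations

def conflict_free(S, attacks):
    S = frozenset(S)
    return not any(A <= S and B <= S for A, B in attacks)

args = {"a", "b", "c"}
# {a, b} jointly attack {c}.
attacks = [(frozenset({"a", "b"}), frozenset({"c"}))]

cf = [frozenset(s) for r in range(len(args) + 1)
      for s in combinations(sorted(args), r)
      if conflict_free(s, attacks)]
# {a, c} is conflict-free (the attack needs both a and b); {a, b, c} is not.
assert frozenset({"a", "c"}) in cf
assert frozenset({"a", "b", "c"}) not in cf
```

Which subsets of an attacking or attacked set should count as "in conflict" is precisely where the universal, indeterministic, and collective readings of set-to-set defeat diverge.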
JAIR Journal 2024 Journal Article
In this paper, we study the effect of preferences in abstract argumentation under a claim-centric perspective. Recent work has revealed that semantical and computational properties can change when reasoning is performed on claim-level rather than on the argument-level, while under certain natural restrictions (arguments with the same claims have the same outgoing attacks) these properties are conserved. We now investigate these effects when, in addition, preferences have to be taken into account and consider four prominent reductions to handle preferences between arguments. As we shall see, these reductions give rise to four new classes of claim-augmented argumentation frameworks. These classes behave differently from each other with respect to semantic properties and computational complexity, but also in connection with structured argumentation formalisms such as assumption-based argumentation. This strengthens the view that the actual choice for handling preferences has to be taken with care.
AIJ Journal 2023 Journal Article
KR Conference 2023 Conference Paper
Simplification of logic programs under the answer-set semantics has been studied from the very beginning of the field. One natural simplification is the removal of atoms that are deemed irrelevant. While equivalence-preserving rewritings are well understood and incorporated in state-of-the-art systems, more careful rewritings in the realm of strong or uniform equivalence have received considerably less attention. This might be due to the fact that these equivalence notions rely on comparisons with respect to context programs that remain the same for both the original and the simplified program. In this work, we pursue the idea that the atoms considered irrelevant are disregarded accordingly in the context programs of the simplification, and propose novel equivalence notions for this purpose. We provide necessary and sufficient conditions for these kinds of simplifiability of programs, and show that such simplifications, if possible, can actually be achieved by just projecting the atoms from the programs themselves. We furthermore provide complexity results for the problems of deciding simplifiability and equivalence testing.
ICAPS Conference 2023 Conference Paper
Classical planning tasks are commonly described in a first-order language. However, most classical planners translate tasks by grounding them and then rewriting them into a propositional language. In recent years, the grounding step has become a larger bottleneck. In this work, we study how to improve it. We build on top of the most common grounder for planning tasks which uses Datalog to find all reachable atoms and actions. Inspired by recent progress in lifted planning, database theory, and algorithmics, we develop a new method to ground these Datalog programs. Our algorithm can ground more instances than the baseline, and most tasks it cannot ground are out of reach from any ground planner.
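The Datalog core of the grounder described above, computing all reachable atoms, boils down to a fixpoint. A naive sketch for a single reachability rule (real grounders use far smarter evaluation, e.g. seminaive or join-ordered):

```python
# Naive Datalog-style fixpoint:
#   reach(X) :- start(X).
#   reach(Y) :- reach(X), edge(X, Y).

def reachable_atoms(edges, start):
    reach = {start}
    changed = True
    while changed:
        changed = False
        for x, y in edges:
            if x in reach and y not in reach:
                reach.add(y)
                changed = True
    return reach

edges = [("a", "b"), ("b", "c"), ("d", "e")]
assert reachable_atoms(edges, "a") == {"a", "b", "c"}
```

Only atoms derivable by such a fixpoint need to be grounded, which is why the quality of this step dominates grounding size.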
ECAI Conference 2023 Conference Paper
Answer Set Programming is a widely applied research area for knowledge representation and for solving problems in industrial domains. One of the challenges of this formalism is the so-called grounding bottleneck, which concerns the efficient replacement of first-order variables by domain values. Recently, there have been several works in this direction, ranging from lazy grounding and hybrid solving to translational approaches. Inspired by a translation from non-ground normal programs to ground disjunctive programs, we attack the grounding bottleneck from a more general angle. We provide a polynomial reduction for grounding disjunctive programs of bounded domain size by reducing to propositional epistemic logic programs (ELPs). By slightly adapting our reduction, we show new complexity results for non-ground programs that adhere to the measure treewidth. We complement these results by matching lower bounds under the exponential time hypothesis, ruling out significantly better algorithms.
NMR Workshop 2023 Conference Paper
In abstract argumentation, arguments jointly attacking single arguments is a well-understood concept, captured by the established notion of SETAFs—argumentation frameworks with collective attacks. In contrast, the idea of sets attacking other sets of arguments has not received much attention so far. In this work, we contribute to the development of set-to-set defeat in formal argumentation. To this end, we introduce so-called hyper argumentation frameworks (HYPAFs), a new formalism that extends SETAFs by allowing for set-to-set attacks. We investigate this notion by interpreting these novel attacks in terms of universal, indeterministic, and collective defeat. We will see that universal defeat can be naturally captured by the already existing SETAFs. While this is not the case for indeterministic defeat, we show a close connection to attack-incomplete argumentation frameworks. To formalize our interpretation of collective defeat, we develop novel semantics yielding a natural generalization of attacks between arguments to set-to-set attacks. We investigate fundamental properties and identify several surprising obstacles; for instance, the well-known fundamental lemma is violated, and the grounded extension might not exist. Finally, we investigate the computational complexity of the thereby arising problems.
AIJ Journal 2023 Journal Article
AIJ Journal 2023 Journal Article
AAAI Conference 2023 Conference Paper
In this paper, we study the effect of preferences in abstract argumentation under a claim-centric perspective. Recent work has revealed that semantical and computational properties can change when reasoning is performed on claim-level rather than on the argument-level, while under certain natural restrictions (arguments with the same claims have the same outgoing attacks) these properties are conserved. We now investigate these effects when, in addition, preferences have to be taken into account and consider four prominent reductions to handle preferences between arguments. As we shall see, these reductions give rise to different classes of claim-augmented argumentation frameworks, and behave differently in terms of semantic properties and computational complexity. This strengthens the view that the actual choice for handling preferences has to be taken with care.
AIJ Journal 2022 Journal Article
IJCAI Conference 2022 Conference Paper
Answer-Set Programming (ASP) has seen tremendous progress over the last two decades and is nowadays successfully applied in many real-world domains. However, for certain types of problems, the well-known ASP grounding bottleneck still causes severe problems. This becomes virulent when grounding of rules, where the variables have to be replaced by constants, leads to a ground program that is too huge to be processed by the ASP solver. In this work, we tackle this problem by a novel method that decouples non-ground atoms in rules in order to delegate the evaluation of rule bodies to the solving process. Our procedure translates a non-ground normal program into a ground disjunctive program that is exponential only in the maximum predicate arity, and thus polynomial if this arity is assumed to be bounded by a constant. We demonstrate the feasibility of this new method experimentally by comparing it to standard ASP technology in terms of grounding size, grounding time and total runtime.
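The scale of the bottleneck is simple arithmetic: naively grounding a rule substitutes every variable by every domain constant, so a rule with k distinct variables over a domain of size n yields n**k ground instances. A back-of-the-envelope sketch (illustrative only, not the paper's decoupling translation):

```python
from itertools import product

def naive_ground_instances(variables, domain):
    """Every assignment of domain constants to the rule's variables."""
    return [dict(zip(variables, vals))
            for vals in product(domain, repeat=len(variables))]

domain = range(50)
# e.g. reach(X, Z) :- edge(X, Y), edge(Y, Z) has three variables:
insts = naive_ground_instances(["X", "Y", "Z"], domain)
assert len(insts) == 50 ** 3   # 125,000 ground rules from one non-ground rule
```

Bounding the blow-up by the maximum predicate arity instead of the number of variables per rule is what makes the translation above polynomial for bounded arity.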
AIJ Journal 2022 Journal Article
JAIR Journal 2022 Journal Article
We study the computational complexity of abstract argumentation semantics based on weak admissibility, a recently introduced concept to deal with arguments of self-defeating nature. Our results reveal that semantics based on weak admissibility are of much higher complexity (under typical assumptions) compared to all argumentation semantics which have been analysed in terms of complexity so far. In fact, we show PSPACE-completeness of all non-trivial standard decision problems for weak-admissible based semantics. We then investigate potential tractable fragments and show that restricting the frameworks under consideration to certain graph-classes significantly reduces the complexity. We also show that weak-admissibility based extensions can be computed by dividing the given graph into its strongly connected components (SCCs). This technique ensures that the bottleneck when computing extensions is the size of the largest SCC instead of the size of the graph itself and therefore contributes to the search for fixed-parameter tractable implementations for reasoning with weak admissibility.
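The SCC-based computation mentioned above rests on decomposing the attack graph into its strongly connected components. A self-contained sketch of just that decomposition step, using Kosaraju's algorithm (the extension computation itself is omitted):

```python
from collections import defaultdict

def sccs(nodes, edges):
    """Strongly connected components of a directed graph (Kosaraju)."""
    graph, rev = defaultdict(list), defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
        rev[b].append(a)
    seen, order = set(), []

    def dfs(v, g, acc):
        stack = [(v, iter(g[v]))]
        seen.add(v)
        while stack:
            u, it = stack[-1]
            nxt = next((w for w in it if w not in seen), None)
            if nxt is None:
                stack.pop()
                acc.append(u)       # post-order
            else:
                seen.add(nxt)
                stack.append((nxt, iter(g[nxt])))

    for v in sorted(nodes):         # first pass: post-order on the graph
        if v not in seen:
            dfs(v, graph, order)
    seen.clear()
    comps = []
    for v in reversed(order):       # second pass: DFS on the reversed graph
        if v not in seen:
            comp = []
            dfs(v, rev, comp)
            comps.append(frozenset(comp))
    return comps

nodes = {"a", "b", "c", "d"}
edges = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "d")]
assert sorted(map(sorted, sccs(nodes, edges))) == [["a", "b"], ["c"], ["d"]]
```

With PSPACE-hard semantics, making the largest SCC rather than the whole graph the bottleneck is exactly the payoff the abstract describes.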
KR Conference 2022 Conference Paper
Argumentation Frameworks (AFs) are a key formalism in AI research. Their semantics have been investigated in terms of principles, which define characteristic properties in order to deliver guidance for analysing established and developing new semantics. Because of the simple structure of AFs, many desired properties hold almost trivially, at the same time hiding interesting concepts behind syntactic notions. We extend the principle-based approach to Argumentation Frameworks with Collective Attacks (SETAFs) and provide a comprehensive overview of common principles for their semantics. Our analysis shows that investigating principles based on decomposing the given SETAF (e.g., directionality or SCC-recursiveness) poses additional challenges in comparison to usual AFs. We introduce the notion of the reduct as well as the modularization principle for SETAFs which will prove beneficial for this kind of investigation. We then demonstrate how our findings can be utilized for incremental computation of extensions and give a novel parameterized tractability result for verifying preferred extensions.
NMR Workshop 2022 Conference Paper
In this paper, we study the effect of preferences in abstract argumentation under a claim-centric perspective. Recent work has revealed that semantical and computational properties can change when reasoning is performed on claim-level rather than on the argument-level, while under certain natural restrictions (arguments with the same claims have the same outgoing attacks) these properties are conserved. We now investigate these effects when, in addition, preferences have to be taken into account and consider four prominent reductions to handle preferences between arguments. As we shall see, these reductions give rise to different classes of claim-augmented argumentation frameworks, and behave differently in terms of semantic properties and computational complexity. This strengthens the view that the actual choice for handling preferences has to be taken with care.
AAAI Conference 2022 Conference Paper
Argumentation frameworks (AFs) are a core formalism in the field of formal argumentation. As most standard computational tasks regarding AFs are hard for the first or second level of the Polynomial Hierarchy, a variety of algorithmic approaches to achieve manageable runtimes have been considered in the past. Among them, the backdoor-approach and the treewidth-approach turned out to yield fixed-parameter tractable fragments. However, many applications yield high parameter values for these methods, often rendering them infeasible in practice. We introduce the backdoor-treewidth approach for abstract argumentation, combining the best of both worlds with a guaranteed parameter value that does not exceed the minimum of the backdoor- and treewidth-parameter. In particular, we formally define backdoor-treewidth and establish fixed-parameter tractability for standard reasoning tasks of abstract argumentation. Moreover, we provide systems to find and exploit backdoors of small width, and conduct systematic experiments evaluating the new parameter.
IJCAI Conference 2022 Conference Paper
Extending the popular Answer Set Programming (ASP) paradigm by introspective reasoning capacities has received increasing interest in recent years. Particular attention is given to the formalism of epistemic logic programs (ELPs), where standard rules are equipped with modal operators which allow one to express conditions on literals being known or possible, i.e., contained in all or some answer sets, respectively. ELPs thus deliver multiple collections of answer sets, known as world views. Employing ELPs for reasoning problems has so far mainly been restricted to standard decision problems (complexity analysis) and enumeration (development of systems) of world views. In this paper, we first establish quantitative reasoning for ELPs, where the acceptance of a certain set of literals depends on the number (proportion) of world views that are compatible with the set. Second, we present a novel system capable of efficiently solving the underlying counting problems required for quantitative reasoning. Our system exploits the graph-based measure treewidth by iteratively finding (graph) abstractions of ELPs.
IJCAI Conference 2021 Conference Paper
Qualitative Choice Logic (QCL) and Conjunctive Choice Logic (CCL) are formalisms for preference handling, with especially QCL being well established in the field of AI. So far, analyses of these logics need to be done on a case-by-case basis, albeit they share several common features. This calls for a more general choice logic framework, with QCL and CCL as well as some of their derivatives being particular instantiations. We provide such a framework, which allows us, on the one hand, to easily define new choice logics and, on the other hand, to examine properties of different choice logics in a uniform setting. In particular, we investigate strong equivalence, a core concept in non-classical logics for understanding formula simplification, and computational complexity. Our analysis also yields new results for QCL and CCL. For example, we show that the main reasoning task regarding preferred models is ϴ₂P-complete for QCL and CCL, while being Δ₂P-complete for a newly introduced choice logic.
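QCL's central connective, ordered disjunction, assigns satisfaction degrees rather than a truth value. The sketch below covers only the atomic case (the degree shift for an unsatisfied left disjunct is assumed to be 1, as it is for atoms); full QCL with nested formulas and optionality is richer:

```python
# Toy satisfaction degrees for ordered disjunction over atoms.
# A x-> B: degree 1 if A holds, degree 2 if only B holds, unsatisfied otherwise.

def degree(interp, formula):
    """formula: an atom string, or ('ox', left, right) for ordered disjunction.
    Returns the satisfaction degree (lower = better), or None if unsatisfied."""
    if isinstance(formula, str):
        return 1 if formula in interp else None
    _, left, right = formula
    dl = degree(interp, left)
    if dl is not None:
        return dl
    dr = degree(interp, right)
    if dr is not None:
        return 1 + dr   # assumption: shift by 1, valid for atomic left disjuncts
    return None

f = ("ox", "a", "b")       # a x-> b : prefer a, accept b
assert degree({"a"}, f) == 1
assert degree({"b"}, f) == 2
assert degree(set(), f) is None
```

Preferred models, the source of the ϴ₂P-complete reasoning task mentioned above, are the models minimizing such degrees.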
JELIA Conference 2021 Conference Paper
Abstract argumentation frameworks with collective attacks (SETAFs) have gained increasing attention in recent years as they provide a natural extension of the well-known abstract argumentation frameworks (AFs) due to Dung. Concerning complexity, it is known that for the standard reasoning tasks in abstract argumentation, SETAFs show the same behavior as AFs, i.e., they are mainly located on the first or second level of the polynomial hierarchy. However, while for AFs there is a rich literature on easier fragments, complexity analyses in this direction are still missing for SETAFs. In particular, the well-known graph classes of acyclic AFs, even-cycle-free AFs, symmetric AFs, and bipartite AFs have been shown tractable. In this paper, we aim to extend these results to the more general notion of SETAFs. In particular, we provide various syntactic notions on SETAFs that naturally generalize the graph properties for directed hypergraphs, and perform a complexity analysis of the prominent credulous and skeptical acceptance problems for several widely used semantics.
KR Conference 2021 Short Paper
Argumentation frameworks are a core formalism in the field of formal argumentation, with several semantics being proposed in the literature. Among them, preferred semantics is one of the most popular but comes with relatively high complexity. In fact, deciding whether an argument is skeptically accepted, i.e., contained in each preferred extension, is Π₂P-complete. In this work we study the complexity of this problem w.r.t. the length of the cycles in the considered AF. Our results show which bounds are necessary to decrease the complexity to coNP and P, respectively. We also consider argumentation frameworks with collective attacks and achieve Π₂P-hardness already for cycles of length 4.
AAAI Conference 2021 Conference Paper
We study the computational complexity of abstract argumentation semantics based on weak admissibility, a recently introduced concept to deal with arguments of self-defeating nature. Our results reveal that semantics based on weak admissibility are of much higher complexity (under typical assumptions) compared to all argumentation semantics which have been analysed in terms of complexity so far. In fact, we show PSPACE-completeness of all non-trivial standard decision problems for weak-admissible based semantics. We then investigate potential tractable fragments and show that restricting the frameworks under consideration to certain graph-classes significantly reduces the complexity. As a strategy for implementation we also provide a polynomial-time reduction to DATALOG with stratified negation.
AAAI Conference 2021 Conference Paper
Claim-augmented argumentation frameworks (CAFs) provide a formal basis to analyze conclusion-oriented problems in argumentation by adopting a claim-focused perspective; they extend Dung AFs by associating a claim with each argument, representing its conclusion. This additional layer offers various possibilities to generalize abstract argumentation semantics, i.e., the re-interpretation of arguments in terms of their claims can be performed at different stages in the evaluation of the framework: One approach is to perform the evaluation entirely at argument-level before interpreting arguments by their claims (inherited semantics); alternatively, one can perform certain steps in the process (e.g., maximization) already in terms of the arguments’ claims (claim-level semantics). The inherent difference between these approaches not only potentially results in different outcomes but, as we will show in this paper, is also mirrored in terms of computational complexity. To this end, we provide a comprehensive complexity analysis of the four main reasoning problems with respect to claim-level variants of preferred, naive, stable, semi-stable and stage semantics, and complete the complexity results of inherited semantics by providing corresponding results for semi-stable and stage semantics. Moreover, we show that deciding whether, for a given framework, the two approaches to a semantics coincide (concurrence) can be surprisingly hard, ranging up to the third level of the polynomial hierarchy.
KR Conference 2020 Conference Paper
Claim-augmented argumentation frameworks (CAFs) constitute a generic formalism for conflict resolution of conclusion-oriented problems in argumentation. CAFs extend Dung argumentation frameworks (AFs) by assigning a claim to each argument. So far, semantics for CAFs are defined with respect to the underlying AF by interpreting the extensions of the respective AF semantics in terms of the claims of the accepted arguments; we refer to them as inherited semantics of CAFs. A central concept of many argumentation semantics is maximization, which can be done with respect to arguments as in preferred semantics, or with respect to the range as in semi-stable semantics. However, common instantiations of argumentation frameworks require maximality on the claim-level and inherited semantics often fail to provide maximal claim-sets even if the underlying AF semantics yields maximal argument sets. To address this issue, we investigate a different approach and introduce claim-level semantics (cl-semantics) for CAFs where maximization is performed on the claim-level. We compare these two approaches for five prominent semantics (preferred, naive, stable, semi-stable, and stage) and relate in total eleven CAF semantics to each other. Moreover, we show that for a certain subclass of CAFs, namely well-formed CAFs, the different versions of preferred and stable semantics coincide, which is not the case for the remaining semantics. We furthermore investigate a recently established translation between well-formed CAFs and SETAFs and show that, in contrast to the inherited naive, semi-stable and stage semantics, the cl-semantics correspond to the respective SETAF semantics. Finally, we investigate the expressiveness of the considered semantics in terms of their signatures.
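The inherited reading described above, evaluate on the argument level and then re-interpret extensions by their claims, is a one-liner to sketch. The argument-level extensions here are simply given, not computed from an AF:

```python
# Toy CAF: argument -> claim, plus extensions from some AF semantics.
claim = {"x1": "p", "x2": "p", "x3": "q"}
extensions = [{"x1", "x3"}, {"x2"}]

# Inherited semantics: read each extension as the set of its arguments' claims.
claim_sets = [frozenset(claim[a] for a in E) for E in extensions]
assert claim_sets == [frozenset({"p", "q"}), frozenset({"p"})]
# Maximality can be lost on the claim level: {p} is a strict subset of {p, q}
# even though {x2} and {x1, x3} are incomparable as argument sets.
```

This loss of claim-level maximality is the failure mode that motivates performing maximization directly on claims, i.e., the cl-semantics.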
AIJ Journal 2020 Journal Article
AIJ Journal 2020 Journal Article
ECAI Conference 2020 Conference Paper
Argumentation frameworks (AFs) provide a central approach to perform reasoning in many formalisms within argumentation in Artificial Intelligence (AI). Semantics for AFs specify criteria under which sets of arguments can be deemed acceptable, with the notion of admissibility being at the core of the main semantics for AFs. A fundamental reasoning task is to find an admissible set containing a queried argument, called credulous acceptance under admissibility. While such a set explains how to argue in favour of a queried argument, finding an explanation in the negative case, i.e., answering why a queried argument is not credulously accepted under admissibility, is less immediate. In this paper, we approach this problem by considering subframeworks of a given AF as witnesses for non-acceptability. Due to the non-monotonicity of semantics for AFs, this requires that every expansion of the witnessing subframework preserve non-acceptance of the argument; otherwise the subframework would not give sufficient reason for rejection. Among our main contributions, (i) we show that this notion of witnessing subframeworks is connected to strong admissibility of AFs, (ii) we investigate the complexity of finding small such subframeworks, and (iii) we extend a recently proposed framework for abstraction in the declarative answer set programming paradigm in order to compute rejecting subframeworks. The resulting system is thus able to deliver explanations also in the case of non-acceptance, and we provide a first empirical study that shows the feasibility of our approach.
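The positive task above, credulous acceptance under admissibility, is easy to check by brute force on small AFs. A minimal sketch (the three-argument chain below is an illustrative example, not one from the paper):

```python
from itertools import combinations

# Toy AF with attacks a -> b and b -> c.
arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}

def conflict_free(S):
    return all((x, y) not in attacks for x in S for y in S)

def defends(S, x):
    # every attacker of x is counter-attacked by some member of S
    return all(any((z, y) in attacks for z in S)
               for (y, t) in attacks if t == x)

def admissible(S):
    return conflict_free(S) and all(defends(S, x) for x in S)

def credulously_accepted(q):
    # exists an admissible set containing q
    return any(admissible(set(c)) and q in c
               for r in range(len(arguments) + 1)
               for c in combinations(sorted(arguments), r))

print(credulously_accepted("c"))  # True: {a, c} is admissible
print(credulously_accepted("b"))  # False: b cannot be defended against a
```

The negative answer for b is exactly the case the paper targets: the brute-force check only says "no", whereas the witnessing-subframework approach aims to explain why no admissible set containing b exists.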
AIJ Journal 2020 Journal Article
ECAI Conference 2020 Conference Paper
Dung’s abstract argumentation frameworks (AFs) are a popular conceptual tool to define semantics for advanced argumentation formalisms. In this process, arguments representing a possible inference of a claim are constructed, and an attack relation between arguments indicates certain conflicts between the claim of one argument and the inference of another. Based on this abstract model, sets of jointly acceptable arguments are then gathered and finally interpreted in terms of their claims. Argumentation formalisms following this style of instantiating Dung AFs naturally produce several arguments with the same claim. This causes several issues and challenges for argumentation systems: on the one hand, the relation between claims remains implicit and, on the other hand, determining the acceptance of claims requires additional computations on top of argument acceptance. An instantiation that avoids this situation could provide additional insights and advantages, thus complementing the standard instantiation process via Dung AFs. Consequently, the research question we tackle is as follows: Can one combine different arguments sharing the same claim into a single abstract argument without affecting the overall results (and which abstract formalisms can serve such a purpose)? As a main result we show that a certain class of frameworks, where arguments with the same claim have the same outgoing attacks, can be equivalently (for all standard semantics) represented as argumentation frameworks with collective attacks where each claim occurs in exactly one argument. We further identify a class of frameworks where one even obtains an equivalent Dung AF with just one argument per claim.
AAAI Conference 2020 Conference Paper
Epistemic logic programs (ELPs) are a popular generalization of standard Answer Set Programming (ASP) providing means for reasoning over answer sets within the language. This richer formalism comes at the price of higher computational complexity reaching up to the fourth level of the polynomial hierarchy. However, in contrast to standard ASP, dedicated investigations towards tractability have not been undertaken yet. In this paper, we give first results in this direction and show that central ELP problems can be solved in linear time for ELPs exhibiting structural properties in terms of bounded treewidth. We also provide a full dynamic programming algorithm that adheres to these bounds. Finally, we show that applying treewidth to a novel dependency structure—given in terms of epistemic literals—allows to bound the number of ASP solver calls in typical ELP solving procedures.
SAT Conference 2020 Conference Paper
Treewidth is one of the most prominent structural parameters. While numerous theoretical results establish tractability under the assumption of fixed treewidth, the practical success of exploiting this parameter lags far behind what theoretical runtime bounds promise. In particular, a naive application of dynamic programming (DP) on tree decompositions (TDs) already struggles on instances of medium width. In this paper, we present several measures to advance this paradigm towards general applicability in practice: we present nested DP, where different levels of abstraction are used to (recursively) compute TDs of a given instance. Further, we integrate the concept of hybrid solving, where subproblems hidden by the abstraction are solved by classical search-based solvers, which leads to an interleaving of parameterized and classical solving. Finally, we provide nested DP algorithms and implementations relying on database technology for variants and extensions of Boolean satisfiability. Experiments indicate that the advancements are promising.
JAIR Journal 2020 Journal Article
In this paper, we aim to study how the performance of modern answer set programming (ASP) solvers is influenced by the treewidth of the input program and to investigate the consequences of this relationship. We first perform an experimental evaluation that shows that the solving performance is heavily influenced by treewidth, given ground input programs that are otherwise uniform, both in size and construction. This observation leads to an important question for ASP, namely, how to design encodings such that the treewidth of the resulting ground program remains small. To this end, we study two classes of disjunctive programs, namely guarded and connection-guarded programs. In order to investigate these classes, we formalize the grounding process using MSO transductions. Our main results show that both classes guarantee that the treewidth of the program after grounding only depends on the treewidth (and the maximum degree, in case of connection-guarded programs) of the input instance. In terms of parameterized complexity, our findings yield corresponding FPT results for answer-set existence for bounded treewidth (and also degree, for connection-guarded programs) of the input instance. We further show that bounding treewidth alone leads to NP-hardness in the data complexity for connection-guarded programs, which indicates that the two classes are fundamentally different. Finally, we show that for both classes, the data complexity remains as hard as in the general case of ASP.
AIJ Journal 2019 Journal Article
IJCAI Conference 2019 Conference Paper
Classical axiomatizations of belief revision include a postulate stating that if new information is consistent with initial beliefs, then revision amounts to simply adding the new information to the original knowledge base. This postulate assumes a conservative attitude towards initial beliefs, in the sense that an agent faced with the need to revise them will seek to preserve initial beliefs as much as possible. In this work we look at operators that can assume different attitudes towards original beliefs. We provide axiomatizations of these operators by varying the aforementioned postulate and obtain representation results that characterize the new types of operators using preorders on possible worlds. We also present concrete examples for each new type of operator, adapting notions from decision theory.
AAAI Conference 2019 Conference Paper
Abstract argumentation frameworks have been introduced by Dung as part of an argumentation process, where arguments and conflicts are derived from a given knowledge base. It is solely this relation between arguments that is then used in order to identify acceptable sets of arguments. A final step concerns the acceptance status of particular statements by reviewing the actual contents of the acceptable arguments. Complexity analysis of abstract argumentation so far has neglected this final step and is concerned with argument names instead of their contents, i.e., their claims. As we outline in this paper, this is not only a slight deviation but can lead to different complexity results. We therefore give a comprehensive complexity analysis of abstract argumentation under a claim-centric view and analyse the four main decision problems under seven popular semantics. In addition, we also address the complexity of common sub-classes and introduce novel parameterisations that exploit the nature of claims explicitly, along with fixed-parameter tractability results.
AAAI Conference 2019 Conference Paper
Modular programming facilitates the creation and reuse of large software, and has recently gathered considerable interest in the context of Answer Set Programming (ASP). In this setting, forgetting, or the elimination of middle variables no longer deemed relevant, is of importance as it allows one to, e.g., simplify a program, make it more declarative, or even hide some of its parts without affecting the consequences for those parts that are relevant. While forgetting in the context of ASP has been extensively studied, its known limitations make it unsuitable to be used in Modular ASP. In this paper, we present a novel class of forgetting operators and show that such operators can always be successfully applied in Modular ASP to forget all kinds of atoms (input, output and hidden), overcoming the impossibility results that exist for general ASP. Additionally, we investigate conditions under which this class of operators preserves the module theorem in Modular ASP, thus ensuring that answer sets of modules can still be composed, and how the module theorem can always be preserved if we further allow the reconfiguration of modules.
JELIA Conference 2019 Conference Paper
Abstract dialectical frameworks (ADFs) are generalizations of Dung’s argumentation frameworks which allow arbitrary relationships among arguments to be expressed. In particular, arguments can not only attack each other, they also may provide support for other arguments and interact in various complex ways. The ADF approach has recently been extended in two different ways. On the one hand, GRAPPA is a framework that applies the key notions underlying ADFs (in particular their operator-based semantics) directly to arbitrary labelled graphs. This allows users to represent argumentation scenarios in their favourite graphical representations without giving up the firm ground of well-defined semantics. On the other hand, ADFs have been further generalized to the multi-valued case to enable fine-grained acceptance values. In this paper we unify these approaches and develop a multi-valued version of GRAPPA combining the advantages of both extensions.
JAIR Journal 2019 Journal Article
Lifting a preference order on elements of some universe to a preference order on subsets of this universe is often guided by postulated properties the lifted order should have. Well-known impossibility results pose severe limits on when such liftings exist if all non-empty subsets of the universe are to be ordered. The extent to which these negative results carry over to other families of sets is not known. In this paper, we consider families of sets that induce connected subgraphs in graphs. For such families, common in applications, we study whether lifted orders satisfying the well-studied axioms of dominance and (strict) independence exist for every or, in another setting, for some underlying order on elements (strong and weak orderability). We characterize families that are strongly and weakly orderable under dominance and strict independence, and obtain a tight bound on the class of families that are strongly orderable under dominance and independence.
JELIA Conference 2019 Conference Paper
A fast-growing research direction in the study of formal argumentation is the development of practical systems for central reasoning problems underlying argumentation. In particular, numerous systems for abstract argumentation frameworks (AF solvers) are available today, covering several argumentation semantics and reasoning tasks. Instead of proposing another algorithmic approach for AF solving, we introduce in this paper distinct AF preprocessing techniques as a solver-independent approach to obtaining performance improvements of AF solvers. We establish a formal framework of replacement patterns to perform local simplifications that are faithful with respect to standard semantics for AFs. Moreover, we provide a collection of concrete replacement patterns. Towards potential applicability, we employ the patterns in a preliminary empirical evaluation of their influence on AF solver performance.
AAAI Conference 2019 Conference Paper
Epistemic Logic Programs (ELPs), that is, Answer Set Programming (ASP) extended with epistemic operators, have received renewed interest in recent years, which led to a flurry of new research, as well as efficient solvers. An important question is under which conditions a sub-program can be replaced by another one without changing the meaning, in any context. This problem is known as strong equivalence, and is well-studied for ASP. For ELPs, this question has been approached by embedding them into epistemic extensions of equilibrium logics. In this paper, we consider a simpler, more direct characterization that is directly applicable to the language used in state-of-the-art ELP solvers. This also allows us to give tight complexity bounds, showing that strong equivalence for ELPs remains coNP-complete, as for ASP. We further use our results to provide syntactic characterizations for tautological rules and rule subsumption for ELPs.
IJCAI Conference 2018 Conference Paper
In line with recent work on belief change in fragments of propositional logic, we study belief update in the Horn fragment. We start from the standard KM postulates used to axiomatize belief update operators; these postulates lend themselves to semantic characterizations in terms of partial (resp. total) preorders on possible worlds. Since the Horn fragment is not closed under disjunction, the standard postulates have to be adapted for the Horn fragment. Moreover, a restriction on the preorders (i.e., Horn compliance) and additional postulates are needed to obtain sensible characterizations for the Horn fragment, and this leads to our main contribution: a representation result which shows that the class of update operators captured by Horn compliant partial (resp. total) preorders over possible worlds is precisely that given by the adapted and augmented Horn update postulates. With these results at hand, we provide concrete Horn update operators and are able to shed light on Horn revision operators based on partial preorders.
SAT Conference 2018 Conference Paper
In this paper, we introduce a novel algorithm to solve projected model counting (PMC). PMC asks to count solutions of a Boolean formula with respect to a given set of projected variables, where multiple solutions that are identical when restricted to the projected variables count as only one solution. Our algorithm exploits small treewidth of the primal graph of the input instance. It runs in time \({\mathcal O}(2^{2^{k+4}} n^2)\) where k is the treewidth and n is the input size of the instance. In other words, we obtain that the problem PMC is fixed-parameter tractable when parameterized by treewidth. Further, we take the exponential time hypothesis (ETH) into consideration and establish lower bounds for bounded-treewidth algorithms for PMC, yielding asymptotically tight runtime bounds for our algorithm.
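For intuition, the PMC problem itself can be stated by brute force on a tiny instance. The formula below is an illustrative example and has nothing to do with the paper's treewidth-based algorithm:

```python
from itertools import product

# Brute-force projected model counting (PMC) on a toy CNF.
# CNF over x1..x3: (x1 or x2) and (not x1 or x3), projected onto {x1}.
n = 3
clauses = [[1, 2], [-1, 3]]      # DIMACS-style signed literals
projected = {1}

def satisfies(assign, clause):
    # a clause holds if some literal agrees with the assignment
    return any(assign[abs(l)] == (l > 0) for l in clause)

models = [a for a in ({i + 1: v for i, v in enumerate(bits)}
                      for bits in product([False, True], repeat=n))
          if all(satisfies(a, c) for c in clauses)]

# Models identical on the projected variables count only once.
projections = {frozenset((v, a[v]) for v in projected) for a in models}
print(len(models), len(projections))  # → 4 2
```

The formula has four models, but restricted to x1 only two distinct assignments remain, so the projected count is 2; the paper's contribution is doing this without enumerating all models, in time governed by the treewidth bound stated above.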
IJCAI Conference 2018 Conference Paper
Abstract dialectical frameworks (ADFs) constitute one of the most powerful formalisms in abstract argumentation. Their high computational complexity poses, however, certain challenges when designing efficient systems. In this paper, we tackle this issue by (i) analyzing the complexity of ADFs under structural restrictions, (ii) presenting novel algorithms which make use of these insights, and (iii) empirically evaluating a resulting implementation which relies on calls to SAT solvers.
IJCAI Conference 2018 Conference Paper
Lifting a preference order on elements of some universe to a preference order on subsets of this universe is often guided by postulated properties the lifted order should have. Well-known impossibility results pose severe limits on when such liftings exist if all non-empty subsets of the universe are to be ordered. The extent to which these negative results carry over to other families of sets is not known. In this paper, we consider families of sets that induce connected subgraphs in graphs. For such families, common in applications, we study whether lifted orders satisfying the well-studied axioms of dominance and (strict) independence exist for every or, in another setting, only for some underlying order on elements (strong and weak orderability). We characterize families that are strongly and weakly orderable under dominance and strict independence, and obtain a tight bound on the class of families that are strongly orderable under dominance and independence.
IJCAI Conference 2018 Conference Paper
Epistemic Logic Programs (ELPs) are an extension of Answer Set Programming (ASP) with epistemic operators that allow for a form of meta-reasoning, that is, reasoning over multiple possible worlds. Existing ELP solving approaches generally rely on making multiple calls to an ASP solver in order to evaluate the ELP. However, in this paper, we show that there also exists a direct translation from ELPs into non-ground ASP with bounded arity. The resulting ASP program can thus be solved in a single shot. We then implement this encoding method, using recently proposed techniques to handle large, non-ground ASP rules, into a prototype ELP solving system. This solver exhibits competitive performance on a set of ELP benchmark instances.
IJCAI Conference 2018 Conference Paper
We study a type of change on knowledge bases inspired by the dynamics of formal argumentation systems, where the goal is to enforce acceptance of certain arguments. We put forward that enforcing acceptance of arguments can be viewed as a member of the wider family of belief change operations, and that an axiomatic treatment of it is therefore desirable. In our case, laying down axioms enables a precise account of the close connection between enforcing arguments and belief revision. Our analysis of enforcing arguments proceeds by (i) axiomatizing it as an operation in propositional logic and providing a representation result in terms of rankings on sets of interpretations, (ii) showing that it stands in close relationship to belief revision, and (iii) using it as a gateway towards a principled treatment of enforcement in abstract argumentation.
KR Conference 2018 Short Paper
Forgetting, or the elimination of middle variables no longer deemed relevant, has recently gained considerable interest in the context of Answer Set Programming (ASP), notably due to the formalization of strong persistence, a property based on strong equivalence between the program and the result of forgetting modulo the atoms being eliminated, which seems to adequately encode the requirements of the forgetting operation. Whereas it has been shown that, in general, it is not always possible in ASP to forget and obey strong persistence, the structure of modules in the form of DLP-functions, namely their restricted interface, invites the investigation of a weaker notion of persistence based on uniform equivalence.
AAAI Conference 2018 Conference Paper
Abstract Dialectical Frameworks (ADFs) generalize Dung’s argumentation frameworks allowing various relationships among arguments to be expressed in a systematic way. We further generalize ADFs so as to accommodate arbitrary acceptance degrees for the arguments. This makes ADFs applicable in domains where both the initial status of arguments and their relationship are only insufficiently specified by Boolean functions. We define all standard ADF semantics for the weighted case, including grounded, preferred and stable semantics. We illustrate our approach using acceptance degrees from the unit interval and show how other valuation structures can be integrated. In each case it is sufficient to specify how the generalized acceptance conditions are represented by formulas, and to specify the information ordering underlying the characteristic ADF operator. We also present complexity results for problems related to weighted ADFs.
IJCAI Conference 2017 Conference Paper
We introduce a parametrized equivalence notion for abstract argumentation that subsumes standard and strong equivalence as corner cases. Under this notion, two argumentation frameworks are equivalent if they deliver the same extensions under any addition of arguments and attacks that do not affect a given set of core arguments. As we will see, this notion of equivalence nicely captures the concept of local simplifications. We provide exact characterizations and complexity results for deciding our new notion of equivalence.
FLAP Journal 2017 Journal Article
JAIR Journal 2017 Journal Article
Dynamic Programming (DP) over tree decompositions is a well-established method for efficiently solving problems, NP-hard in general, on instances of small treewidth. Experience shows that (i) heuristically computing a tree decomposition has negligible runtime compared to the DP step; and (ii) DP algorithms exhibit a high variance in runtime when using different tree decompositions; in fact, given an instance of the problem at hand, even decompositions of the same width might yield extremely diverging runtimes. We thus propose here a novel and general method that is based on selection of the best decomposition from an available pool of heuristically generated ones. For this purpose, we require machine learning techniques that provide automated selection based on features of the decomposition rather than on the actual problem instance. Thus, one main contribution of this work is to propose novel features for tree decompositions. Moreover, we report on extensive experiments in different problem domains which show a significant speedup when choosing the tree decomposition according to this concept over simply using an arbitrary one of the same width.
IJCAI Conference 2017 Conference Paper
Several computational problems of abstract argumentation frameworks (AFs) such as skeptical and credulous reasoning, existence of a non-empty extension, verification, etc. have been thoroughly analyzed for various semantics. In contrast, the enumeration problem of AFs (i.e., the problem of computing all extensions according to some semantics) has been left unexplored so far. The goal of this paper is to fill this gap. We thus investigate the enumeration complexity of AFs for a large collection of semantics and, in addition, consider the most common structural restrictions on AFs.
AAAI Conference 2017 Conference Paper
Powerful formalisms for abstract argumentation have been proposed. Their complexity is often located beyond NP and ranges up to the third level of the polynomial hierarchy. The combined complexity of Answer-Set Programming (ASP) exactly matches this complexity when programs are restricted to predicates of bounded arity. In this paper, we exploit this coincidence and present novel efficient translations from abstract dialectical frameworks (ADFs) and GRAPPA to ASP. We also empirically compare our approach to other systems for ADF reasoning and report promising results.
IJCAI Conference 2017 Conference Paper
In this paper, we aim to study how the performance of modern answer set programming (ASP) solvers is influenced by the treewidth of the input program and to investigate the consequences of this relationship. We first perform an experimental evaluation that shows that the solving performance is heavily influenced by the treewidth, given ground input programs that are otherwise uniform, both in size and construction. This observation leads to an important question for ASP, namely, how to design encodings such that the treewidth of the resulting ground program remains small. To this end, we define the class of connection-guarded programs, which guarantees that the treewidth of the program after grounding only depends on the treewidth (and the degree) of the input instance. In order to obtain this result, we formalize the grounding process using MSO transductions.
IJCAI Conference 2016 Conference Paper
Answer Set Programming (ASP) has recently been employed to specify and run dynamic programming (DP) algorithms on tree decompositions, a central approach in the field of parameterized complexity, which aims at solving hard problems efficiently for instances of certain structure. This ASP-based method followed the standard DP approach where tables are computed in a bottom-up fashion, yielding good results for several counting or enumeration problems. However, for optimization problems this approach lacks the possibility to report solutions before the optimum is found, and for search problems it often computes a lot of unnecessary rows. In this paper, we present a novel ASP-based system allowing for "lazy" DP, which utilizes recent multi-shot ASP technology. Preliminary experimental results show that this approach not only yields better performance for search problems, but also outperforms some state-of-the-art ASP encodings for optimization problems in terms of anytime computation, i.e., measuring the quality of the best solution after a certain timeout.
ECAI Conference 2016 Conference Paper
Merging is one of the central operations in the field of belief change, which is concerned with aggregating the opinions of individuals. Representation theorems provide a family of merging operators satisfying some natural desiderata for merging beliefs. However, little is known about how these operators can be further distinguished. In the field of social choice, on the other hand, numerous properties have been proposed in order to classify voting rules. In this work, we adapt these properties to the context of merging and investigate how they relate to the standard postulates. Our results thus lead to a more fine-grained classification of merging operators and shed light on the question of which particular merging operator is best suited in a concrete application domain.
ECAI Conference 2016 Conference Paper
Disjunctive Answer Set Programming (ASP) is a powerful declarative programming paradigm whose main decision problems are located on the second level of the polynomial hierarchy. Identifying tractable fragments and developing efficient algorithms for such fragments are thus important objectives in order to complement the sophisticated ASP systems available to date. Hard problems can become tractable if some problem parameter is bounded by a fixed constant; such problems are then called fixed-parameter tractable (FPT). While several FPT results for ASP exist, parameters that relate to directed or signed graphs representing the program at hand have been neglected so far. In this paper, we first give some negative observations showing that directed width measures on the dependency graph of a program do not lead to FPT results. We then consider the graph parameter of signed clique-width and present a novel dynamic programming algorithm that is FPT w.r.t. this parameter. Clique-width is more general than the well-known treewidth, and, to the best of our knowledge, ours is the first FPT algorithm for bounded clique-width for reasoning problems beyond SAT.
IJCAI Conference 2016 Conference Paper
Understanding the behavior of belief change operators for fragments of classical logic has received increasing interest over the last years. Results in this direction are mainly concerned with adapting representation theorems. However, fragment-driven belief change also leads to novel research questions. In this paper we propose the concept of belief distribution, which can be understood as the reverse task of merging. More specifically, we are interested in the following question: given an arbitrary knowledge base K and some merging operator Δ, can we find a profile E and a constraint μ, both from a given fragment of classical logic, such that Δμ(E) yields a result equivalent to K? In other words, we are interested in seeing if K can be distributed into knowledge bases of simpler structure, such that the task of merging allows for a reconstruction of the original knowledge. Our initial results show that merging based on drastic distance allows for an easy distribution of knowledge, while the power of distribution for operators based on Hamming distance relies heavily on the fragment of choice.
IJCAI Conference 2016 Conference Paper
Understanding the relation between different semantics in abstract argumentation is an important issue, not least since such semantics capture the basic ingredients of different approaches to nonmonotonic reasoning. The question we are interested in relates two semantics as follows: What are the necessary and sufficient conditions, such that we can decide, for any two sets of extensions, whether there exists an argumentation framework which has exactly the first extension set under one semantics, and the second extension set under the other semantics. We investigate in total nine argumentation semantics and give a nearly complete landscape of exact characterizations. As we shall argue, such results not only give an account on the independency between semantics, but might also prove useful in argumentation systems by providing guidelines for how to prune the search space.
LOPSTR Conference 2016 Conference Paper
State-of-the-art answer set programming (ASP) solvers rely on a program called a grounder to convert non-ground programs containing variables into variable-free, propositional programs. The size of this grounding depends heavily on the size of the non-ground rules, and thus, reducing the size of such rules is a promising approach to improve solving performance. To this end, in this paper we announce lpopt, a tool that decomposes large logic programming rules into smaller rules that are easier to handle for current solvers. The tool is specifically tailored to handle the standard syntax of the ASP language (ASP-Core) and makes it easier for users to write efficient and intuitive ASP programs, which would otherwise often require significant hand-tuning by expert ASP engineers. It is based on an idea proposed by Morak and Woltran (2012) that we extend significantly in order to handle the full ASP syntax, including complex constructs like aggregates, weak constraints, and arithmetic expressions. We present the algorithm, the theoretical foundations on how to treat these constructs, as well as an experimental evaluation showing the viability of our approach.
KR Conference 2016 Conference Paper
Formalizing dynamics of argumentation has received increasing attention over the last years. While AGM-like representation results for revision of argumentation frameworks (AFs) are now available, similar results for the problem of merging are still missing. In this paper, we close this gap and adapt model-based propositional belief merging to define extension-based merging operators for AFs. We state an axiomatic and a constructive characterization of merging operators through a family of rationality postulates and a representation theorem. Then we exhibit merging operators which satisfy the postulates. In contrast to the case of revision, we observe that obtaining a single framework as result of merging turns out to be a more subtle issue. Finally, we establish links between our new results and previous approaches to merging of AFs, which mainly relied on axioms from Social Choice Theory, but lacked AGM-like representation theorems.
AIJ Journal 2016 Journal Article
KR Conference 2016 Conference Paper
Abstract argumentation frameworks (AFs) are one of the central formalisms in AI; equipped with a wide range of semantics, they have proven useful in several application domains. We contribute to the systematic analysis of semantics for AFs by connecting two recent lines of research – the work on input/output frameworks and the study of the expressiveness of semantics. We do so by considering the following question: given a function describing an input/output behaviour by mapping extensions (resp. labellings) to sets of extensions (resp. labellings), is there an AF with designated input and output arguments realizing this function under a given semantics? For the major semantics we give exact characterizations of the functions which are realizable in this manner.
ECAI Conference 2016 Conference Paper
In this paper we introduce a new approach for revising and merging consistent Horn formulae under minimal model semantics. Our approach is translation-based in the following sense: we generate a propositional encoding capturing both the syntax of the original Horn formulae (which clauses appear in them and which do not) and their semantics (their minimal models). We can then use any classical revision or merging operator to perform belief change on the encoding. The resulting propositional theory is then translated back into a Horn formula. We identify some specific operators which guarantee a particular kind of minimal change. A unique feature of our approach is that it allows us to control whether minimality of change primarily relates to the syntax or to the minimal model semantics of the Horn formula. We give an axiomatic characterization of minimal change on the minimal model for this new setting, and we show that some specific translation-based revision and merging operators satisfy our postulates.
IJCAI Conference 2015 Conference Paper
Argumentation is an inherently dynamic process. Consequently, recent years have witnessed tremendous research efforts towards an understanding of how the seminal AGM theory of belief change can be applied to argumentation, in particular for Dung’s abstract argumentation frameworks (AFs). However, none of the attempts has yet succeeded in handling the natural situation where the revision of an AF is guaranteed to be representable by an AF as well. In this work, we present a generic solution to this problem which applies to many prominent I-maximal argumentation semantics. In order to prove a full representation theorem, we make use of recent advances in both areas of argumentation and belief change. In particular, we utilize the concepts of realizability in argumentation and the notion of compliance as used in Horn revision.
AIJ Journal 2015 Journal Article
IJCAI Conference 2015 Conference Paper
IJCAI Conference 2015 Conference Paper
Dynamic Programming (DP) over tree decompositions is a well-established method to solve problems – that are in general NP-hard – efficiently for instances of small treewidth. Experience shows that (i) heuristically computing a tree decomposition has negligible runtime compared to the DP step; (ii) DP algorithms exhibit a high variance in runtime when using different tree decompositions; in fact, given an instance of the problem at hand, even decompositions of the same width might yield extremely diverging runtimes. We thus propose here a novel and general method that is based on a selection of the best decomposition from an available pool of heuristically generated ones. For this purpose, we require machine learning techniques based on features of the decomposition rather than on the actual problem instance. We report on extensive experiments in different problem domains which show a significant speedup when choosing the tree decomposition according to this concept over simply using an arbitrary one of the same width.
IJCAI Conference 2015 Conference Paper
Belief merging is a central operation within the field of belief change and addresses the problem of combining multiple, possibly mutually inconsistent knowledge bases into a single, consistent one. A current research trend in belief change is concerned with tailored representation theorems for fragments of logic, in particular Horn logic. Hereby, the goal is to guarantee that the result of the change operations stays within the fragment under consideration. While several such results have been obtained for Horn revision and Horn contraction, merging of Horn theories has been neglected so far. In this paper, we provide a novel representation theorem for Horn merging by strengthening the standard merging postulates. Moreover, we present a concrete Horn merging operator satisfying all postulates.
AIJ Journal 2015 Journal Article
IJCAI Conference 2015 Conference Paper
Parameterized complexity is a well recognized vehicle for understanding the multitude of complexity that AI problems typically exhibit. However, the prominent problem of belief revision has not undergone a systematic investigation in this direction yet. This is somewhat surprising, since by its very nature of involving a knowledge base and a revision formula, this problem provides a perfect playground for investigating novel parameters. Among our results on the parameterized complexity of revision is thus a versatile fpt algorithm which is based on the parameter of the number of atoms shared by the knowledge base and the revision formula. Towards identifying the frontier between parameterized tractability and intractability, we also give hardness results for classes such as co-W[1], para-Θ^p_2, and FPT^NP[f(k)].
ECAI Conference 2014 Conference Paper
Recently, belief change within the framework of fragments of propositional logic has gained increasing attention. Previous works focused on belief contraction and belief revision on the Horn fragment. However, the problem of belief merging within fragments of propositional logic has been neglected so far. This paper presents a general approach to define new merging operators derived from existing ones such that the result of merging remains in the fragment under consideration. Our approach is not limited to the Horn fragment but applicable to any fragment of propositional logic characterized by a closure property on the sets of models of its formulæ. We study the logical properties of the proposed operators in terms of satisfaction of merging postulates, considering in particular distance-based merging operators for Horn and Krom fragments.
KR Conference 2014 Conference Paper
The signature Σσ gives the collection of all possible sets of extensions an AF can possess under semantics σ. We shall focus on several important semantics, namely naive, preferred, semistable, stage, stable, and complete semantics (Dung 1995; Verheij 1996; Caminada, Carnielli, and Dunne 2012), and aim at finding simple criteria to decide whether a set S is contained in Σσ. For instance, we will show that each S ∈ Σpref satisfies the condition that for each pair of distinct sets A and B from S there is at least one a ∈ A and b ∈ B such that a, b do not occur together in any set in S. Thus, for instance, S = {{a, b}, {c, d}, {a, c}, {b, d}} is part of the signature Σpref while neither S ∪ {{a, d}} nor S ∪ {{b, c}} are. This fact can be exploited in a search procedure for enumerating preferred extensions: assume for a given AF F, the four extensions from S have already been calculated as preferred extensions of F. The procedure can now restrict the search space to find further extensions of F (if they exist) to sets with at least one argument different from a, b, c, and d. The problem we study here is also essential in many other aspects. First, our results are important for constructing AFs. Indeed, knowing whether a set S is contained in Σσ is a necessary condition which should be checked before actually looking for an AF F which realizes S under σ, i.e., σ(F) = S. This is of high importance when dynamic aspects of argumentation are considered (Falappa et al. 2011). As an example, suppose a framework F possesses as its σ-extensions a set S and one asks for an adaptation of the framework F such that its σ-extensions are given by S ∪ {E}, i.e., one extension is to be added. The addition of E to S may, for instance, be desired by some agent on the grounds that E contains some subset of arguments which it wishes to be collectively accepted by other agents: no extension in S, however, provides support for the subset of interest to be considered justifiable.
Furthermore, the agent wishing to add E is reluctant to jeopardize the chance of this happening if the modified AF is such that some existing element of S ceases to be an extension: such an outcome being likely to prejudice other agents against agreeing to changes which admit E. Before considering the adapted framework's structure, it is obviously crucial to know whether an appropriate framework exists at all, i.e., whether S ∪ {E} ∈ Σσ. In a recent paper on revision of AFs (Coste-Marquis et al. 2013), the authors circumvent this issue by allowing revision to result in a set of AFs rather than a single framework.

The study of extension-based semantics within the seminal abstract argumentation model of Dung has largely focused on definitional, algorithmic and complexity issues. In contrast, matters relating to comparisons of representational limits, in particular, the extent to which given collections of extensions are expressible within the formalism, have been under-developed. As such, little is known concerning conditions under which a candidate set of subsets of arguments is “realistic” in the sense that it corresponds to the extensions of some argumentation framework AF for a semantics of interest. In this paper we present a formal basis for examining extension-based semantics in terms of the sets of extensions that these may express within a single AF. We provide a number of characterization theorems which guarantee the existence of AFs whose set of extensions satisfy specific conditions and derive preliminary complexity results for decision problems that require such characterizations.
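The pairwise condition on Σpref described above is easy to check mechanically. The following sketch (our own illustration; the function name is invented) tests it on the example sets from the text:

```python
from itertools import combinations

def pairwise_condition(S):
    """Necessary condition for S to lie in the preferred signature, as
    stated in the text: for each pair of distinct sets A, B in S there
    exist a in A and b in B occurring together in no set of S."""
    sets = [frozenset(E) for E in S]
    for A, B in combinations(sets, 2):
        if not any(all(not ({a, b} <= E) for E in sets)
                   for a in A for b in B):
            return False
    return True

S = [{"a", "b"}, {"c", "d"}, {"a", "c"}, {"b", "d"}]
print(pairwise_condition(S))                   # True
print(pairwise_condition(S + [{"a", "d"}]))    # False
print(pairwise_condition(S + [{"b", "c"}]))    # False
```

Under this check, S passes (e.g., a and d never co-occur), while both augmented collections fail, matching the text's claim about Σpref membership.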
ECAI Conference 2014 Conference Paper
Abstract argumentation frameworks (AFs) are one of the most studied formalisms in AI. In this work, we introduce a certain subclass of AFs which we call compact. Given an extension-based semantics, the corresponding compact AFs are characterized by the feature that each argument of the AF occurs in at least one extension. This not only guarantees a certain notion of fairness; compact AFs are thus also minimal in the sense that no argument can be removed without changing the outcome. We address the following questions in the paper: (1) How are the classes of compact AFs related for different semantics? (2) Under which circumstances can AFs be transformed into equivalent compact ones? (3) Finally, we show that compact AFs are indeed a non-trivial subclass, since the verification problem remains coNP-hard for certain semantics.
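As a toy illustration of the compactness property above (our own sketch, not from the paper: brute-force stable semantics on hypothetical two- and three-argument AFs):

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Brute-force stable extensions: conflict-free sets that attack
    every argument outside the set. Suitable for toy AFs only."""
    atts = set(attacks)
    exts = []
    for r in range(len(args) + 1):
        for E in combinations(sorted(args), r):
            Es = set(E)
            conflict_free = not any((x, y) in atts for x in Es for y in Es)
            covers = all(any((x, b) in atts for x in Es)
                         for b in set(args) - Es)
            if conflict_free and covers:
                exts.append(Es)
    return exts

def is_compact(args, attacks):
    """Compact AF: every argument occurs in at least one extension."""
    exts = stable_extensions(args, attacks)
    return bool(exts) and set(args) == set().union(*exts)

# Mutual attack between a and b: the extensions {a} and {b} cover both.
print(is_compact({"a", "b"}, {("a", "b"), ("b", "a")}))              # True
# Adding c, attacked by both: c appears in no stable extension.
print(is_compact({"a", "b", "c"},
                 {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}))  # False
```

In the second AF, removing c leaves the stable extensions unchanged, illustrating the minimality reading of compactness mentioned above.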
AIJ Journal 2014 Journal Article
IS Journal 2014 Journal Article
This article provides a short survey of some of the most popular abstract argumentation frameworks available today. The authors present the general idea of abstract argumentation, highlighting the role of abstract frameworks in the argumentation process, and review the original Dung frameworks and their semantics. A discussion of generalizations of these frameworks follows, focusing on structures taking preferences and values into account and approaches in which not only attack but also support relations can be modeled. Finally, the authors review the concept of abstract dialectical frameworks, one of the most general systems for abstract argumentation providing a flexible, principled representation of arbitrary argument relations.
ECAI Conference 2014 Conference Paper
Graphical models are widely used in argumentation to visualize relationships among propositions or arguments. The intuitive meaning of the links in the graphs is typically expressed using labels of various kinds. In this paper we introduce a general semantical framework for assigning a precise meaning to labelled argument graphs which makes them suitable for automatic evaluation. Our approach rests on the notion of explicit acceptance conditions, as first studied in Abstract Dialectical Frameworks (ADFs). The acceptance conditions used here are functions from multisets of labels to truth values. We define various Dung style semantics for argument graphs. We also introduce a pattern language for specifying acceptance functions. Moreover, we show how argument graphs can be compiled to ADFs, thus providing an automatic evaluation tool via existing ADF implementations. Finally, we also discuss complexity issues.
JELIA Conference 2014 Conference Paper
Abstract Complex reasoning problems over large amounts of data pose a great challenge for computer science. To overcome the obstacle of high computational complexity, exploiting structure by means of tree decompositions has proved to be effective in many cases. However, the implementation of suitable efficient algorithms is often tedious. D-FLAT is a software system that combines the logic programming language Answer Set Programming with problem solving on tree decompositions and can serve as a rapid prototyping tool for such algorithms. Since we initially proposed D-FLAT, we have made major changes to the system, improving its range of applicability and its usability. In this paper, we present the system resulting from these efforts.
IJCAI Conference 2013 Conference Paper
We present various new concepts and results related to abstract dialectical frameworks (ADFs), a powerful generalization of Dung’s argumentation frameworks (AFs). In particular, we show how the existing definitions of stable and preferred semantics which are restricted to the subcase of so-called bipolar ADFs can be improved and generalized to arbitrary frameworks. Furthermore, we introduce preference handling methods for ADFs, allowing for both reasoning with and about preferences. Finally, we present an implementation based on an encoding in answer set programming.
AAAI Conference 2013 Conference Paper
We introduce abstract preference frameworks to study general properties common across a variety of preference formalisms. In particular, we study strong equivalence in preference formalisms and their separability. We identify abstract postulates on preference frameworks, satisfied by most of the currently studied preference formalisms, that lead to characterizations of both properties of interest.
IJCAI Conference 2013 Conference Paper
Many AI-related reasoning problems are based on the problem of satisfiability (SAT). While SAT itself becomes easy when restricting the structure of the formulas in a certain way, this is not guaranteed for more involved reasoning problems. In this work, we focus on reasoning tasks in the areas of belief revision and logic-based abduction and show that in some cases the restriction to Krom formulas (i.e., formulas in CNF where clauses have at most two literals) decreases the complexity, while in others it does not. We thus also consider additional restrictions to Krom formulas towards a better identification of the tractability frontier of such problems.
AIJ Journal 2013 Journal Article
KR Conference 2012 Conference Paper
As far as we know, only a few works have focused on belief revision within the framework of fragments of propositional logic, apart from the Horn case (Delgrande and Peppas 2011; Wu, Zhang, and Zhang 2011; Van De Putte 2011). The study of belief change within language fragments is motivated by two central observations:
• In many applications, the language is restricted a priori. For instance, a rule-based formalization of expert knowledge is much easier to handle for standard users. When users revise some rules, they expect the outcome to remain in the easy-to-read format they are used to.
• Many fragments of propositional logic allow for efficient reasoning methods. Consider an agent who frequently has to answer queries about its beliefs. To do this efficiently, the beliefs are stored as a formula known to be in a tractable class. When the agent's beliefs undergo a revision, the result should again lie in the same fragment, so that the agent can keep using the dedicated solving method for that fragment. If such changes are rare, it matters less whether the revision itself can be performed efficiently; what matters is that the outcome can still be evaluated efficiently.
It thus seems natural to investigate how known operators can be refined such that they work properly within a language fragment. The main obstacle is that for a language fragment L0 and formulas ψ, µ ∈ L0, there is no guarantee that the outcome of a revision, ψ ◦ µ, remains in L0 as well. Let, for example, ψ = a ∧ b and µ = ¬a ∨ ¬b be Horn formulas, i.e., in conjunctive normal form (CNF) with at most one positive literal per clause. Revising ψ by µ using Dalal's revision operator (Dalal 1988) leaves the Horn fragment, since the result of the revision is equivalent to (a ∨ b) ∧ (¬a ∨ ¬b), which is not Horn-expressible.
The natural question arises whether there exist refinements ⋆ of ◦ such that ψ ⋆ µ ∈ L0 always holds, but properties of ◦ are retained whenever possible. For instance, for such a refined operator it seems reasonable that ψ ⋆ µ is equivalent to ψ ◦ µ whenever ψ ◦ µ ∈ L0.

Belief revision has been extensively studied in the framework of propositional logic, but just recently revision within fragments of propositional logic has gained attention. Hereby it is not only the belief set and the revision formula which are given within a certain language fragment, but also the result of the revision has to be located in the same fragment. So far, research in this direction has been mainly devoted to the Horn fragment of classical logic. In this work, we present a general approach to define new revision operators derived from known operators (as, for instance, Satoh's and Dalal's revision operators), such that the result of the revision remains in the fragment under consideration. Our approach is not limited to the Horn case but applicable to any fragment of propositional logic where the models of the formulas are closed under a Boolean function. Thus we are able to uniformly treat cases such as dual-Horn, Krom and affine formulas as well.
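The Horn example above can be replayed mechanically. The sketch below (our own illustration of Dalal's distance-based revision over two atoms) computes the revision's models and checks closure under intersection, the semantic hallmark of Horn-expressibility:

```python
from itertools import product

atoms = ("a", "b")
worlds = list(product([0, 1], repeat=2))                 # assignments to (a, b)

psi_models = [w for w in worlds if w[0] and w[1]]        # psi = a AND b
mu_models = [w for w in worlds if not w[0] or not w[1]]  # mu = NOT a OR NOT b

def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))

# Dalal's operator: models of mu at minimal Hamming distance from Mod(psi).
dmin = min(hamming(u, v) for u in mu_models for v in psi_models)
result = [u for u in mu_models
          if min(hamming(u, v) for v in psi_models) == dmin]
print(result)                                            # [(0, 1), (1, 0)]

# Horn-expressible model sets are closed under intersection (bitwise AND);
# (0, 0) = (0, 1) AND (1, 0) is missing, so the result leaves the fragment.
closed = all(tuple(x and y for x, y in zip(u, v)) in result
             for u in result for v in result)
print(closed)                                            # False
```

The surviving models are exactly those of (a ∨ b) ∧ (¬a ∨ ¬b), and the failed closure check confirms the text's point that the revision result is not Horn.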
KR Conference 2012 Conference Paper
The focus of this paper is both on the identification of such lower-complexity fragments of second-level reasoning problems arising from abstract argumentation, and on exploiting this knowledge in developing efficient complexity-sensitive decision procedures for the generic second-level problems. Tractable (i.e., polynomial-time decidable) fragments have been quite thoroughly studied in the literature (see, e.g., (Coste-Marquis, Devred, and Marquis 2005; Dunne 2007; Dvořák, Szeider, and Woltran 2010; Dvořák, Pichler, and Woltran 2011; Ordyniak and Szeider 2011)). However, there is only little work on the identification of fragments which are located on the first level (NP-coNP layer), that is, in between tractability and full second-level complexity. Identification of first-level fragments of second-level reasoning tasks is important for several reasons. First, from a theoretical point of view, such fragments reveal particular (but not all) sources of complexity of the considered problems and pave the way towards “trichotomy”-like results (e.g., (Truszczynski 2011) in the context of answer-set programming). Second, NP fragments can be efficiently reduced to the problem of satisfiability in classical propositional logic (SAT). This allows for realizations of argumentation procedures by employing highly sophisticated SAT solver technology in reasoning on argumentation problems. Going even further, we aim at designing decision procedures for larger fragments based on decision procedures developed for an NP fragment, using the NP decision procedures as an NP oracle in an iterative fashion. Such procedures fall under the general counter-example guided abstraction refinement (CEGAR) approach originating from the field of model checking (Clarke et al. 2003; Clarke, Gupta, and Strichman 2004).
For problems complete for the second level of the polynomial hierarchy, this leads to a general procedure which, in the worst case, requires an exponential number of calls to the NP oracle, which is indeed unavoidable under the assumption that the polynomial hierarchy does not collapse. Nevertheless, such procedures can be designed to behave adequately on input instances that fall into the considered NP fragment and on instances for which a relatively low number of oracle calls is sufficient. As a generic notion, we say that such a procedure is complexity-sensitive w.r.t. the NP fragment at hand. The successful loop-formula approach to the second-level problem of answer-set existence for disjunctive logic programs is a case in point.

Abstract argumentation frameworks (AFs) provide the basis for various reasoning problems in the areas of Knowledge Representation and Artificial Intelligence. Efficient evaluation of AFs has thus been identified as an important research challenge. So far, implemented systems for evaluating AFs have either followed a straight-forward reduction-based approach or been limited to certain tractable classes of AFs. In this work, we present a generic approach for reasoning over AFs, based on the novel concept of complexity-sensitivity. Establishing the theoretical foundations of this approach, we derive several new complexity results for preferred, semistable and stage semantics which complement the current complexity landscape for abstract argumentation, providing further understanding on the sources of intractability of AF reasoning problems. The introduced generic framework exploits decision procedures for problems of lower complexity whenever possible. This allows, in particular, instantiations of the generic framework via harnessing in an iterative way current sophisticated Boolean satisfiability (SAT) solver technology for solving the considered AF reasoning problems.
First experimental results show that the SAT-based instantiation of our novel approach outperforms existing systems.
KR Conference 2012 Conference Paper
Representing and reasoning about preferences in qualitative settings is an important research area for knowledge representation and qualitative decision theory. The main objectives are to design expressive yet intuitive languages to model preferences, and to develop automated methods to reason about formal representations of preferences in these languages. The literature on the subject of preferences is vast. We refer the reader to the special issue of Artificial Intelligence Magazine (Goldsmith and Junker 2008) for a collection of overview articles and references. Understanding when optimization problems are equivalent, in particular, when one can be interchanged with another within any larger context, is fundamental to any preference formalism. Speaking informally, optimization problems P and Q are interchangeable or strongly equivalent when for every optimization problem R (context), P ∪ R and Q ∪ R define the same optimal models. Understanding when one optimization problem is equivalent to another in this sense is essential for preference analysis, modular preference representation, and rewriting techniques to simplify optimization problems into forms more amenable to processing, without changing any of their inherent properties. Let us consider a multi-agent setting, in which agents combine their preferences on some set of alternatives with the goal of identifying optimal ones. Can one agent in the ensemble be replaced with another so that the set of optimal alternatives is unaffected not only now, but also under any extension of the ensemble in the future? Strong equivalence of agents’ optimization problems is precisely what is needed to guarantee this full interchangeability property! The notion of strong equivalence is of general interest, by no means restricted to preference formalisms. In some cases, most notably for classical logic, it coincides with equivalence, the property of having the same models. 
However, if the semantics is not monotone, that is, extending the theory may introduce new models, not only eliminate some, strong equivalence becomes a strictly stronger concept, and the one to adopt if theories being analyzed are to be placed within a larger context. The nonmonotonicity of the semantics is the salient feature of nonmonotonic logics (Marek and Truszczyński 1993), and strong equivalence of theories in nonmonotonic logics, especially logic programming with the answer-set semantics (Gelfond and Lifschitz 1991), was extensively studied in that setting.

We introduce the framework of qualitative optimization problems (or, simply, optimization problems) to represent preference theories. The formalism uses separate modules to describe the space of outcomes to be compared (the generator) and the preferences on outcomes (the selector). We consider two types of optimization problems. They differ in the way the generator, which we model by a propositional theory, is interpreted: by the standard propositional logic semantics, and by the equilibrium-model (answer-set) semantics. Under the latter interpretation of generators, optimization problems directly generalize answer-set optimization programs proposed previously. We study strong equivalence of optimization problems, which guarantees their interchangeability within any larger context. We characterize several versions of strong equivalence obtained by restricting the class of optimization problems that can be used as extensions and establish the complexity of associated reasoning tasks. Understanding strong equivalence is essential for modular representation of optimization problems and rewriting techniques to simplify them without changing their inherent properties.
AIJ Journal 2012 Journal Article
AIJ Journal 2011 Journal Article
IJCAI Conference 2011 Conference Paper
The concept of "ideal semantics" has been promoted as an alternative basis for skeptical reasoning within abstract argumentation settings. Informally, ideal acceptance not only requires an argument to be skeptically accepted in the traditional sense but further insists that the argument is in an admissible set all of whose arguments are also skeptically accepted. The original proposal was couched in terms of the so-called preferred semantics for abstract argumentation. We argue, in this paper, that the notion of "ideal acceptability" is applicable to arbitrary semantics and justify this claim by showing that standard properties of classical ideal semantics, e.g., unique status, continue to hold in any "reasonable" extension-based semantics. We categorise the relationship between the diverse concepts of "ideal extension w.r.t. a semantics σ" that arise and we present a comprehensive analysis of algorithmic and complexity-theoretic issues.
IJCAI Conference 2011 Conference Paper
One criticism often advanced against abstract argumentation frameworks (AFs) is that these consider only one form of interaction between atomic arguments: specifically, that an argument attacks another. Attempts to broaden the class of relationships include bipolar frameworks, where arguments support others, and abstract dialectical frameworks (ADFs). The latter allow "acceptance" of an argument x to be predicated on a given propositional function, C_x, dependent on the corresponding acceptance of its parents, i.e., those y for which a link (y, x) occurs. Although offering a richly expressive formalism subsuming both standard and bipolar AFs, an issue that arises with ADFs is whether this expressiveness is achieved in a manner that would be infeasible within standard AFs. Can the semantics used in ADFs be mapped to some AF semantics? How many arguments are needed in an AF to "simulate" an ADF? We show that (in a formally defined sense) any ADF can be simulated by an AF of similar size and that this translation can be realised by a polynomial time algorithm.
JELIA Conference 2010 Conference Paper
Abstract We present a novel system for propositional Answer-Set Programming (ASP). This system, called dynASP, is based on dynamic programming and thus significantly differs from standard ASP-solvers which implement techniques stemming from SAT or CSP.
KR Conference 2010 Conference Paper
Since argumentation is an inherently dynamic process, it is of great importance to understand the effect of incorporating new information into given argumentation frameworks. In this work, we address this issue by analyzing equivalence between argumentation frameworks under the assumption that the frameworks in question are incomplete, i.e., further information might be added later to both frameworks simultaneously. In other words, instead of the standard notion of equivalence (which holds between two frameworks, if they possess the same extensions), we require here that frameworks F and G are also equivalent when conjoined with any further framework H. Due to the nonmonotonicity of argumentation semantics, this concept is different to (but obviously implies) the standard notion of equivalence. We thus call our new notion strong equivalence and study how strong equivalence can be decided with respect to the most important semantics for abstract argumentation frameworks. We also consider variants of strong equivalence in which we define equivalence with respect to the sets of arguments credulously (or skeptically) accepted, and restrict strong equivalence to augmentations H where no new arguments are raised.

As an example, consider two frameworks F and G over arguments a, b, and c (drawn as attack graphs in the original), which are equivalent under most of the known semantics. Let us use the preferred semantics here, which selects maximal conflict-free and self-defending sets of arguments (a formal definition is given in the next section). Then, both frameworks have the same unique preferred extension, namely the empty set. However, if we add a new argument which attacks b, the situation becomes different (the original depicts the two extended frameworks, each with a fresh argument d attacking b). If we let H = ({b, d}, {(d, b)}), the AF on the left can be considered as the “union” of F and H which we will denote F ∪ H, slightly abusing notation. The AF on the right is then G ∪ H. However, F ∪ H and G ∪ H are not equivalent. In fact, {c, d} is the unique preferred extension of F ∪ H and {a, d} is the unique preferred extension of G ∪ H.
However, it is not even necessary to add a new argument to make this implicit difference between F and G explicit. Consider now H = ({a, b}, {(a, b)}). Then F ∪ H and G ∪ H are again not equivalent.
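The example can be checked by brute force. In the sketch below we reconstruct F and G as two opposite three-cycles over a, b, c — a plausible reading of the figures, not given explicitly in the text — and compute preferred extensions naively:

```python
from itertools import combinations

def preferred_extensions(args, atts):
    """Brute-force preferred extensions: subset-maximal admissible sets."""
    def conflict_free(E):
        return not any((x, y) in atts for x in E for y in E)
    def defends(E, a):
        # every attacker y of a is counter-attacked by some member of E
        return all(any((x, y) in atts for x in E)
                   for (y, z) in atts if z == a)
    admissible = [frozenset(E)
                  for r in range(len(args) + 1)
                  for E in combinations(sorted(args), r)
                  if conflict_free(frozenset(E))
                  and all(defends(frozenset(E), a) for a in E)]
    return {E for E in admissible if not any(E < F for F in admissible)}

# Reconstructed frameworks (an assumption): two opposite 3-cycles.
F = ({"a", "b", "c"}, {("a", "b"), ("b", "c"), ("c", "a")})
G = ({"a", "b", "c"}, {("a", "c"), ("c", "b"), ("b", "a")})
H = ({"b", "d"}, {("d", "b")})

def union(AF1, AF2):
    return (AF1[0] | AF2[0], AF1[1] | AF2[1])

print(preferred_extensions(*F), preferred_extensions(*G))  # both empty-set only
print(preferred_extensions(*union(F, H)))  # {frozenset({'c', 'd'})}
print(preferred_extensions(*union(G, H)))  # {frozenset({'a', 'd'})}
```

Under this reconstruction the outputs match the behaviour described in the text: F and G agree on the empty preferred extension, yet diverge once d attacking b is added.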
AAAI Conference 2010 Conference Paper
We study methods to specify preferences among subsets of a set (a universe). The methods we focus on are of two types. The first one assumes the universe comes with a preference relation on its elements and attempts to lift that relation to subsets of the universe. That approach has limited expressivity but results in orderings that capture interesting general preference principles. The second method consists of developing formalisms allowing the user to specify “atomic” improvements, and generating from them preferences on the powerset of the universe. We show that the particular formalism we propose is expressive enough to capture the lifted preference relations of the first approach, and generalizes propositional CP-nets. We discuss the importance of domain-independent methods for specifying preferences on sets for knowledge representation formalisms, selecting the formalism of argumentation frameworks as an illustrative example.
JELIA Conference 2010 Conference Paper
Abstract Many proposals for logic-based formalizations of argumentation consider an argument as a pair (Φ, α), where the support Φ is understood as a minimal consistent subset of a given knowledge base which has to entail the claim α. In most scenarios, arguments are given in the full language of classical propositional logic which makes reasoning in such frameworks a computationally costly task. For instance, the problem of deciding whether there exists a support for a given claim has been shown to be Σ^p_2-complete. In order to better understand the sources of complexity (and to identify tractable fragments), we focus on arguments given over formulae in which the allowed connectives are taken from certain sets of Boolean functions. We provide a complexity classification for four different decision problems (existence of a support, checking the validity of an argument, relevance and dispensability) with respect to all possible sets of Boolean functions.
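For very small knowledge bases, supports in the sense above can be enumerated by brute force. The following sketch (our own illustration; the knowledge base, atoms and helper names are invented, with truth-table entailment over a handful of atoms) finds all minimal consistent subsets entailing a claim:

```python
from itertools import combinations, product

def entails(premises, claim, atoms):
    """Classical entailment via truth tables over the given atoms."""
    return all(claim(w) for w in
               (dict(zip(atoms, vals))
                for vals in product([False, True], repeat=len(atoms)))
               if all(p(w) for p in premises))

def supports(kb, claim, atoms):
    """All minimal consistent subsets of kb (as index tuples) entailing claim."""
    found = []
    for r in range(len(kb) + 1):          # by size, so minimality is preserved
        for phi in combinations(range(len(kb)), r):
            subset = [kb[i] for i in phi]
            consistent = not entails(subset, lambda w: False, atoms)
            if (consistent and entails(subset, claim, atoms)
                    and not any(set(s) <= set(phi) for s in found)):
                found.append(phi)
    return found

atoms = ("p", "q", "r")
kb = [lambda w: w["p"],                   # p
      lambda w: (not w["p"]) or w["q"],   # p -> q
      lambda w: w["r"]]                   # r (irrelevant to the claim)
claim = lambda w: w["q"]                  # alpha = q
print(supports(kb, claim, atoms))         # [(0, 1)]
```

The unique support {p, p → q} is found, while the irrelevant fact r is excluded by minimality — the Σ^p_2-hardness noted above is precisely why this exhaustive strategy cannot scale.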
ECAI Conference 2010 Conference Paper
Logic-based abduction is an important reasoning method with many applications in Artificial Intelligence including diagnosis, planning, and configuration. The goal of an abduction problem is to find a "solution", i.e., an explanation for some observed symptoms. Usually, many solutions exist, and one is often interested in minimal ones only. Previous definitions of "solutions" to an abduction problem tacitly made an open-world assumption. However, as far as minimality is concerned, this assumption may not always lead to the desired behavior. To overcome this problem, we propose a new definition of solutions based on a closed-world approach. Moreover, we also introduce a new variant of minimality where only a part of the hypotheses is subject to minimization. A thorough complexity analysis reveals the close relationship between these two new notions as well as the differences compared with previous notions of solutions.
KR Conference 2010 Conference Paper
Abstract argumentation frameworks have received a lot of interest in recent years. Most computational problems in this area are intractable but several tractable fragments have been identified. In particular, Dunne showed that many problems can be solved in linear time for argumentation frameworks of bounded tree-width. However, these tractability results, which were obtained via Courcelle’s Theorem, do not directly lead to efficient algorithms. The goal of this paper is to turn the theoretical tractability results into efficient algorithms and to explore the potential of directed notions of tree-width for defining larger tractable fragments.
KR Conference 2010 Conference Paper
Cardinality constraints or, more generally, weight constraints are well recognized as an important extension of answer-set programming. Clearly, all common algorithmic tasks related to programs with cardinality or weight constraints (PWCs) – like checking the consistency of a program – are intractable. Many intractable problems in the area of knowledge representation and reasoning have been shown to become tractable if the treewidth of the programs or formulas under consideration is bounded by some constant. The goal of this paper is to apply the notion of treewidth to PWCs and to identify tractable fragments. It will turn out that the straightforward application of treewidth to PWCs does not suffice to obtain tractability. However, by imposing further restrictions, tractability can be achieved. Main results of the paper:
• We show that the consistency problem of PWCs remains NP-complete even if the treewidth of the considered programs is bounded by a constant (in fact, even if this constant is 1). Hence, we have to search for further restrictions on the PWCs to ensure tractability.
• We thus consider the largest integer occurring in the (lower and upper) bounds of the constraints in the PWC, and call this parameter the constraint-width. If the constraint-width is also bounded by an arbitrary but fixed constant, then the consistency problem of PWCs becomes tractable in linear time (the bound on the running time contains a constant factor that is exponential in constraint-width and treewidth).
• For PCCs (i.e., PWCs where all weights are equal to 1) we obtain non-uniform polynomial-time tractability by designing a new dynamic programming algorithm: let w denote the treewidth of a PCC and let n denote the size of the PCC. Then our algorithm works in time O(f(w) · n^{2w}) for some function f that depends only on the treewidth, not on the size n of the program. The term "non-uniform" refers to the factor n^{2w} in the time bound, where the size n of the program is raised to the power of an expression that depends on the treewidth w. We shall also discuss further extensions of this dynamic programming algorithm for PCCs; e.g., it can be used to solve, in non-uniform polynomial time, the consistency problem of PWCs if the weights are given in unary representation.
• Of course, an algorithm for the PCC consistency problem that operates in time O(f(w) · n^{O(1)}) – i.e., one in which the parameter w does not occur in the exponent of n – would be preferable.
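To fix intuitions about the objects involved (not the treewidth-based algorithm of the paper), a cardinality constraint l {a1, ..., an} u is satisfied by an interpretation when the number of its literals that hold lies between the bounds. A minimal sketch:

```python
def satisfies_cardinality(interpretation, lower, literals, upper):
    """Check lower <= |{satisfied literals}| <= upper.

    `interpretation` is the set of true atoms; a literal is a pair
    (atom, positive). Illustrates only the constraint semantics, not
    the dynamic programming algorithm described in the abstract.
    """
    count = sum(
        1 for atom, positive in literals
        if (atom in interpretation) == positive
    )
    return lower <= count <= upper

# 1 {a, b, not c} 2 under interpretation {a}:
# the satisfied literals are a and "not c", so the count is 2.
print(satisfies_cardinality({"a"}, 1, [("a", True), ("b", True), ("c", False)], 2))  # True
```

In a weight constraint each literal additionally carries a weight and `count` becomes a weighted sum; the constraint-width parameter above bounds the integers that may appear as `lower` and `upper`.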
IJCAI Conference 2009 Conference Paper
In this paper, we present a novel approach to the evaluation of propositional answer-set programs. In particular, for programs with bounded treewidth, our algorithm is capable of (i) computing the number of answer sets in linear time and (ii) enumerating all answer sets with linear delay. Our algorithm relies on dynamic programming. Therefore, our approach significantly differs from standard ASP systems which implement techniques stemming from SAT or CSP, and thus usually do not exploit fixed-parameter properties of the programs. We provide first experimental results which underline that, for programs with low treewidth, even a prototypical implementation is competitive compared to state-of-the-art systems.
AIJ Journal 2009 Journal Article
KR Conference 2008 Conference Paper
We address the problem of belief revision in (nonmonotonic) logic programming under answer set semantics: given logic programs P and Q, the goal is to determine a program R that corresponds to the revision of P by Q, denoted P * Q. Unlike previous approaches in logic programming, our formal techniques are analogous to those of distance-based belief revision in propositional logic. In developing our results, we build upon the model theory of logic programs furnished by SE models. Since SE models provide a formal, monotonic characterisation of logic programs, we can adapt well-known techniques from the area of belief revision to revision in logic programs. We investigate two specific operators: (logic program) expansion and a revision operator based on the distance between the SE models of logic programs. It proves to be the case that expansion is an interesting operator in its own right, unlike in classical AGM-style belief revision where it is relatively uninteresting. Expansion and revision are shown to satisfy a suite of interesting properties; in particular, our revision operators satisfy the majority of the AGM postulates for revision. A complexity analysis reveals that our revision operators do not increase the complexity of the base formalism. As a consequence, we present an encoding for computing the revision of a logic program by another, within the same logic programming framework.
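The SE models underlying this approach can be enumerated by brute force for small propositional normal programs. The following is an illustrative sketch only (it does not implement the paper's revision operators): a pair (X, Y) with X ⊆ Y is an SE-model of P iff Y is a model of P and X is a classical model of the reduct P^Y; for these programs Y ⊨ P coincides with Y ⊨ P^Y, so the code checks both against the reduct.

```python
from itertools import combinations

# Brute-force SE-model enumeration for propositional normal programs.
# A rule is (head, pos_body, neg_body); atoms are strings.

def classical_model(interp, rules):
    """interp satisfies each rule: body true implies head true."""
    return all(not (pos <= interp and not (neg & interp)) or head in interp
               for head, pos, neg in rules)

def se_models(program, atoms):
    subsets = [set(c) for k in range(len(atoms) + 1)
               for c in combinations(sorted(atoms), k)]
    models = []
    for Y in subsets:
        # Reduct P^Y: drop rules whose negative body clashes with Y.
        reduct = [(h, p, set()) for h, p, n in program if not (n & Y)]
        if not classical_model(Y, reduct):   # equivalent to Y |= P here
            continue
        for X in subsets:
            if X <= Y and classical_model(X, reduct):
                models.append((X, Y))
    return models

# P: a <- not b.
P = [("a", set(), {"b"})]
for X, Y in se_models(P, {"a", "b"}):
    print(sorted(X), sorted(Y))
```

Two programs are strongly equivalent exactly when their sets of SE-models coincide, which is what makes this monotonic characterisation a usable substrate for distance-based revision.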
LPAR Conference 2008 Conference Paper
Many intractable problems have been shown to become tractable if the treewidth of the underlying structure is bounded by a constant. An important tool for deriving such results is Courcelle's Theorem, which states that all properties defined by Monadic Second-Order (MSO) sentences are fixed-parameter tractable with respect to the treewidth. Arnborg et al. extended this result to counting problems defined via MSO properties. However, the MSO description of a problem is of course not an algorithm. Consequently, proving the fixed-parameter tractability of some problem via Courcelle's Theorem can be considered as the starting point rather than the endpoint of the search for an efficient algorithm. Gottlob et al. have recently presented a new approach via monadic datalog to actually devise efficient algorithms for decision problems whose tractability follows from Courcelle's Theorem. In this paper, we extend this approach and apply it to some fundamental counting problems in logic and artificial intelligence.
KR Conference 2008 Conference Paper
Ordered disjunctions have been introduced as a simple, yet expressive approach for representing preferential knowledge by means of logic programs. The semantics for the resulting language is based on the answer-set semantics, but comes in different variants, depending on the particular interpretation of preference aggregation associated to the ordered disjunction connective. While in standard answer-set programming the question of when a program is to be considered equivalent to another received increasing attention in recent years, this problem has not been addressed for programs with ordered disjunctions so far. In this paper, we discuss the concept of strong equivalence in this setting. We introduce different versions of strong equivalence for programs with ordered disjunctions and provide model-theoretic characterisations, extending well-known ones for strong equivalence between ordinary logic programs. Furthermore, we discuss the relationships between the proposed notions and study their computational complexity.
IJCAI Conference 2007 Conference Paper
Recent research in nonmonotonic logic programming under the answer-set semantics focuses on different notions of program equivalence. However, previous results do not address the important class of stratified programs and its subclass of acyclic (i.e., recursion-free) programs, although these are recognized as important tools for knowledge representation and reasoning. In this paper, we consider such programs, possibly augmented with constraints. Our results show that in the propositional setting, where reasoning is well known to be polynomial, deciding strong and uniform equivalence is as hard as for arbitrary normal logic programs (and thus coNP-complete), but is polynomial in some restricted cases. Non-ground programs behave similarly. However, exponential lower bounds already hold for small programs (i.e., with constantly many rules). In particular, uniform equivalence is undecidable even for small Horn programs plus a single negative constraint.
ECAI Conference 2006 Conference Paper
Various problems in artificial intelligence (AI) can be solved by translating them into a quantified boolean formula (QBF) and evaluating the resulting encoding. In this approach, a QBF solver is used as a black box in a rapid implementation of a more general reasoning system. Most of the current solvers for QBFs require formulas in prenex conjunctive normal form as input, which makes a further translation necessary, since the encodings are usually not in a specific normal form. This additional step increases the number of variables in the formula or disrupts the formula's structure. Moreover, the most important part of this transformation, prenexing, is not deterministic. In this paper, we focus on an alternative way to process QBFs without these drawbacks and describe a solver, qpro, which is able to handle arbitrary formulas. To this end, we extend algorithms for QBFs to the non-normal form case and compare qpro with the leading normal-form provers on problems from the area of AI.
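The reason a solver can skip CNF translation entirely is that a QBF in arbitrary form can be evaluated directly by recursion on its structure. Below is a minimal, naive sketch of that idea (exponential-time semantics only, not the pruning techniques of a solver such as qpro):

```python
# Minimal recursive evaluator for quantified Boolean formulas in
# arbitrary (non-clausal) form. Formulas are nested tuples:
# ("var", name), ("not", f), ("and", f, g), ("or", f, g),
# ("forall", name, f), ("exists", name, f).

def evaluate(formula, assignment=None):
    assignment = assignment or {}
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    # Quantifiers branch on both truth values of the bound variable.
    _, name, body = formula
    results = (evaluate(body, {**assignment, name: v}) for v in (False, True))
    return all(results) if op == "forall" else any(results)

# forall x exists y (x <-> y), written without any normal form:
phi = ("forall", "x", ("exists", "y",
      ("or", ("and", ("var", "x"), ("var", "y")),
             ("and", ("not", ("var", "x")), ("not", ("var", "y"))))))
print(evaluate(phi))  # True
```

No prenexing step is needed because quantifiers are handled wherever they occur in the formula tree, which is precisely the structural information that CNF translations disrupt.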
JELIA Conference 2006 Conference Paper
Answer-set programming (ASP) has emerged as an important paradigm for declarative problem solving, and provides a host for many different application domains on the basis of nonmonotonic logic programs. The increasing popularity of ASP has also raised interest in semantic comparisons of programs in ASP [3, 4], which are nowadays recognized as a theoretical basis for program optimization, where equivalence-preserving modifications are of primary interest; in particular, rewriting rules which allow one to perform a local change in a program are important. Many such rules have been considered in the propositional setting (cf., e.g., [1, 6]) but have only recently been extended to the practically important case of non-ground programs [2].
JELIA Conference 2006 Conference Paper
In recent work, a general framework for specifying correspondences between logic programs under the answer-set semantics has been defined. The framework captures different notions of equivalence, including well-known ones like ordinary, strong, and uniform equivalence, as well as refined ones based on the projection of answer sets where not all parts of an answer set are of relevance. In this paper, we describe an implementation to verify program correspondences in this general framework. The system, called cc⊤, relies on linear-time constructible reductions to quantified propositional logic and uses extant solvers for the latter language as back-end inference engines.
KR Conference 2006 Conference Paper
In this paper, we propose a formal framework for specifying rule replacements in nonmonotonic logic programs within the answer-set programming paradigm. Of particular interest are replacement schemas retaining specific notions of equivalence, among them the prominent notions of strong and uniform equivalence, which have been introduced as theoretical tools for program optimization and verification. We derive some general properties of the replacement framework with respect to these notions of equivalence. Moreover, we generalize results about particular replacement schemas which have been established for ground programs to the non-ground case. Finally, we report a number of complexity results which address the problem of deciding how hard it is to apply a replacement to a given program. Our results provide an important step towards the development of effective optimization methods for non-ground answer-set programming, an issue which has not been addressed much so far.
JELIA Conference 2004 Conference Paper
Recent research in nonmonotonic logic programming focuses on alternative notions of equivalence. In particular, strong and uniform equivalence are both proposed as useful tools to optimize (parts of) a logic program. More specifically, given a set P of program rules and a possible optimization Q, strong (resp. uniform) equivalence requires that adding any set S of rules (resp. facts) to P and Q simultaneously results in equivalent programs, i.e., P ∪ S and Q ∪ S possess the same stable models. However, in practice it is often necessary to relax this condition in such a way that dedicated internal atoms in P or Q are no longer allowed to occur in the possible extensions S. In this paper, we consider these relativized notions of both uniform and strong equivalence and provide semantical characterizations by generalizing the notions of UE- and SE-modelhood. These new characterizations capture all notions of equivalence, including ordinary equivalence, in a uniform way. Finally, we analyze the complexity of the introduced equivalence tests for the important classes of normal and disjunctive logic programs. As a by-product, we reduce the tests for relativized equivalences to ordinary equivalence between two programs. These reductions may serve as a basis for implementation.
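The interplay of ordinary, uniform, and strong equivalence can be made concrete with a brute-force stable-model computation for tiny propositional normal programs. This is a toy illustration under simplifying assumptions, not the SE-/UE-model characterizations of the paper:

```python
from itertools import combinations

# A rule is (head, pos_body, neg_body), e.g. a <- b, not c is
# ("a", {"b"}, {"c"}). An interpretation is a set of true atoms.

def least_model(positive_rules):
    """Fixpoint (least model) of a negation-free program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program, atoms):
    """I is stable iff I equals the least model of the reduct P^I."""
    result = []
    for k in range(len(atoms) + 1):
        for candidate in combinations(sorted(atoms), k):
            interp = set(candidate)
            reduct = [(h, set(p), set()) for h, p, n in program
                      if not (set(n) & interp)]
            if least_model(reduct) == interp:
                result.append(interp)
    return result

# P: a <- not b.     Q: a.
P = [("a", set(), {"b"})]
Q = [("a", set(), set())]
atoms = {"a", "b"}
# Ordinarily equivalent: both have the single answer set {a} ...
print(answer_sets(P, atoms) == answer_sets(Q, atoms))            # True
# ... but adding the fact b as context S separates P u S from Q u S,
# so P and Q are not uniformly (hence not strongly) equivalent.
S = [("b", set(), set())]
print(answer_sets(P + S, atoms) == answer_sets(Q + S, atoms))    # False
```

Relativized equivalence, as studied in the paper, restricts which atoms may occur in the context S; with S ranging over the empty alphabet the test collapses to ordinary equivalence, and with S unrestricted it becomes strong (rules) or uniform (facts) equivalence.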
KR Conference 2004 Conference Paper
Answer Set Programming has become a host for expressing knowledge representation problems, which reinforces the interest in efficient methods for computing answer sets of a logic program. While for propositional programs, the complexity of this task has been amply studied and is well-understood, less attention has been paid to the case of non-ground programs, which is much more important from a KR language perspective. Existing Answer Set Programming systems employ different representations of models, but the consequences of these representations for answer set computation and reasoning tasks have not been analyzed in detail. In this paper, we present novel complexity results on answer set checking for non-ground programs under two methods for representing answer sets and a variety of syntactic restrictions. In particular, we consider set-based and bitmap-based representations, which are popular in implementations of Answer Set Programming systems. Based on these results, we also derive new complexity results for the canonical reasoning tasks over answer sets, under the assumption that predicate arities are bounded by some constant. Our results imply that in such a setting - which appears to be a reasonable assumption in practice - more efficient implementations than those currently available may be feasible.
NMR Workshop 2004 Conference Paper
We define the class of head-cycle free nested logic programs, and its proper subclass of acyclic nested programs, generalising similar classes originally defined for disjunctive logic programs. We then extend several results known for acyclic and head-cycle free disjunctive programs under the stable model semantics to the nested case. Most notably, we provide a propositional semantics for the program classes under consideration. This generalises different extensions of Fages' theorem, including a recent result by Erdem and Lifschitz for tight logic programs. We further show that, based on a shifting method, head-cycle free nested programs can be rewritten into normal programs in polynomial time and space, extending a similar technique for head-cycle free disjunctive programs. All this shows that head-cycle free nested programs constitute a subclass of nested programs possessing a lower computational complexity than arbitrary nested programs, provided the polynomial hierarchy does not collapse.
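The shifting method mentioned above is easy to state for ordinary (non-nested) disjunctive rules, which is the setting this sketch assumes: a rule a1 ∨ ... ∨ ak ← body is replaced by the k normal rules ai ← body, not a1, ..., not a(i-1), not a(i+1), ..., not ak. For head-cycle free programs this transformation preserves the stable models.

```python
# Illustrative shifting for plain disjunctive programs (the paper
# extends the technique to the nested case). A rule is
# (head_atoms, pos_body, neg_body).

def shift(program):
    """Replace each disjunctive rule by one normal rule per head atom."""
    shifted = []
    for heads, pos, neg in program:
        for atom in heads:
            # The other head atoms move, negated, into the body.
            others = {h for h in heads if h != atom}
            shifted.append(([atom], pos, neg | others))
    return shifted

# a v b <- c.   becomes   a <- c, not b.   and   b <- c, not a.
rules = [(["a", "b"], {"c"}, set())]
for head, pos, neg in shift(rules):
    print(head, sorted(pos), sorted(neg))
```

The output size is linear in the input (each disjunctive rule of k head atoms yields k normal rules), which is why the rewriting stays within polynomial time and space.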
KR Conference 2004 Conference Paper
Disjunction is generally considered to add expressive power to logic programs under the stable model semantics, which have become a popular programming paradigm for knowledge representation and reasoning. However, disjunction is often not really needed, in that an equivalent program without disjunction can be given. In this paper, we consider the question, given a disjunctive logic program, P, under which conditions an equivalent normal (i.e., disjunction-free) logic program Q exists. In fact, we study this problem under different notions of equivalence, viz. for ordinary equivalence (considering the collections of all stable models of the programs) as well as for the more restrictive notions of strong and uniform equivalence. We resolve the issue for propositional programs on which we focus here, and present a simple, appealing semantic criterion from which all disjunctions can be eliminated under strong equivalence. Testing this criterion is coNP-complete, but the class of programs satisfying it has the same complexity as disjunctive logic programs in general. We also show that under ordinary and uniform equivalence, disjunctions can always be eliminated. In all cases, we give constructive methods to achieve this. However, we also provide evidence that disjunctive logic programs are a more succinct knowledge representation formalism than normal logic programs under all these notions of equivalence.
SAT Conference 2003 Conference Paper
The majority of the currently available solvers for quantified Boolean formulas (QBFs) process input formulas only in prenex conjunctive normal form. However, the natural representation of practically relevant problems in terms of QBFs usually results in formulas which are not in a specific normal form. Hence, in order to evaluate such QBFs with available solvers, suitable normal-form translations are required. In this paper, we report experimental results comparing different prenexing strategies on a class of structured benchmark problems. The problems under consideration encode the evaluation of nested counterfactuals over a propositional knowledge base, and span the entire polynomial hierarchy. The results show that different prenexing strategies influence the evaluation time in different ways across different solvers. In particular, some solvers are robust to the chosen strategies while others are not.
NMR Workshop 2002 Conference Paper
Nested logic programs have recently been introduced in order to allow for arbitrarily nested formulas in the heads and the bodies of logic program rules under the answer-set semantics. Nested expressions can be formed using conjunction, disjunction, as well as the negation as failure operator in an unrestricted fashion. This provides a very flexible and compact framework for knowledge representation and reasoning. Previous results show that nested logic programs can be transformed into standard (unnested) disjunctive logic programs in an elementary way, applying the negation as failure operator to body literals only. This is of great practical relevance since it allows us to evaluate nested logic programs by means of off-the-shelf disjunctive logic programming systems, like DLV. However, it turns out that this straightforward transformation results in an exponential blow-up in the worst case, despite the fact that complexity results indicate that there is a polynomial translation between both formalisms. In this paper, we take up this challenge and provide a polynomial translation of logic programs with nested expressions into disjunctive logic programs. Moreover, we show that this translation is modular and (strongly) faithful. We have implemented both the straightforward as well as our advanced transformation; the resulting compiler serves as a front-end to DLV and is publicly available on the Web.
JELIA Conference 2002 Conference Paper
Signed systems were introduced as a general, syntax-independent framework for paraconsistent reasoning, that is, non-trivialised reasoning from inconsistent information. In this paper, we show how the family of corresponding paraconsistent consequence relations can be axiomatised by means of quantified Boolean formulas. This approach has several benefits. First, it furnishes an axiomatic specification of paraconsistent reasoning within the framework of signed systems. Second, this axiomatisation allows us to identify upper bounds for the complexity of the different signed consequence relations. We strengthen these upper bounds by providing strict complexity results for the considered reasoning tasks. Finally, we obtain an implementation of different forms of paraconsistent reasoning by appeal to the existing system QUIP.