Arrow Research search

Author name cluster

Marc Denecker

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

42 papers
2 author rows

Possible papers

42

AAAI Conference 2024 Conference Paper

Using Symmetries to Lift Satisfiability Checking

  • Pierre Carbonnelle
  • Gottfried Schenner
  • Maurice Bruynooghe
  • Bart Bogaerts
  • Marc Denecker

We analyze how symmetries can be used to compress structures (also known as interpretations) onto a smaller domain without loss of information. This analysis suggests the possibility of solving satisfiability problems in the compressed domain for better performance. Thus, we propose a novel 2-step method: (i) the sentence to be satisfied is automatically translated into an equisatisfiable sentence over a "lifted" vocabulary that allows domain compression; (ii) satisfiability of the lifted sentence is checked by growing the (initially unknown) compressed domain until a satisfying structure is found. The key issue is to ensure that this satisfying structure can always be expanded into an uncompressed structure that satisfies the original sentence. We present an adequate translation for sentences in typed first-order logic extended with aggregates. Our experimental evaluation shows large speedups for generative configuration problems. The method also has applications in the verification of software operating on complex data structures. Our results justify further research in automatic translation of sentences for symmetry reduction.

JELIA Conference 2023 Conference Paper

Towards Systematic Treatment of Partial Functions in Knowledge Representation

  • Djordje Markovic
  • Maurice Bruynooghe
  • Marc Denecker

Partial functions are ubiquitous in Knowledge Representation applications, ranging from practical, e.g., business applications, to more abstract, e.g., mathematical and programming applications. Expressing propositions about partial functions may lead to non-denoting terms, resulting in undefinedness errors and ambiguity, causing subtle modeling and reasoning problems. In our approach, formulas are well-defined (true or false) and non-ambiguous in all structures. We develop a base extension of three-valued predicate logic, in which partial function terms are guarded by domain expressions ensuring the well-definedness property despite the three-valued nature of the underlying logic. To tackle the verbosity of this core language, we propose different ways to increase convenience by using disambiguating annotations and non-commutative connectives. We show a reduction of the logic to a two-valued logic of total functions and prove that many different unnesting methods turning partial functions into graph predicates, which are not equivalence preserving in general, are equivalence preserving in the proposed language, showing that ambiguity is avoided.

IJCAI Conference 2021 Conference Paper

On the Relation Between Approximation Fixpoint Theory and Justification Theory

  • Simon Marynissen
  • Bart Bogaerts
  • Marc Denecker

Approximation Fixpoint Theory (AFT) and Justification Theory (JT) are two frameworks to unify logical formalisms. AFT studies semantics in terms of fixpoints of lattice operators, and JT in terms of so-called justifications, which are explanations of why certain facts do or do not hold in a model. While the approaches differ, the frameworks were designed with similar goals in mind, namely to study the different semantics that arise in (mainly) non-monotonic logics. The first contribution of our current paper is to provide a formal link between the two frameworks. To be precise, we show that every justification frame induces an approximator and that this mapping from JT to AFT preserves all major semantics. The second contribution exploits this correspondence to extend JT with a novel class of semantics, namely ultimate semantics: we formally show that ultimate semantics can be obtained in JT by a syntactic transformation on the justification frame, essentially performing some sort of resolution on the rules.

JELIA Conference 2019 Conference Paper

Explaining Actual Causation in Terms of Possible Causal Processes

  • Marc Denecker
  • Bart Bogaerts 0001
  • Joost Vennekens

We point to several kinds of knowledge that play an important role in controversial examples of actual causation. One is knowledge about the causal mechanisms in the domain and the causal processes that result from them. Another is knowledge of what conditions trigger such mechanisms and what conditions can make them fail. We argue that to solve questions of actual causation, such knowledge needs to be made explicit. To this end, we develop a new language in the family of CP-logic, in which causal mechanisms and causal processes are formal objects. We then build a framework for actual causation in which various “production” notions of actual causation are defined. Contrary to counterfactual definitions, these notions are defined directly in terms of the (formal) causal process that causes the possible world.

IJCAI Conference 2017 Conference Paper

Safe Inductions: An Algebraic Study

  • Bart Bogaerts
  • Joost Vennekens
  • Marc Denecker

In many knowledge representation formalisms, a constructive semantics is defined based on sequential applications of rules or of a semantic operator. These constructions often share the property that rule applications must be delayed until it is safe to do so: until it is known that the condition that triggers the rule will remain to hold. This intuition occurs for instance in the well-founded semantics of logic programs and in autoepistemic logic. In this paper, we formally define the safety criterion algebraically. We study properties of so-called safe inductions and apply our theory to logic programming and autoepistemic logic. For the latter, we show that safe inductions manage to capture the intended meaning of a class of theories on which all classical constructive semantics fail.

IJCAI Conference 2016 Conference Paper

Distributed Autoepistemic Logic and its Application to Access Control

  • Pieter Van Hertum
  • Marcos Cramer
  • Bart Bogaerts
  • Marc Denecker

In this paper we define and study an extension of autoepistemic logic (AEL) called distributed autoepistemic logic (dAEL) with multiple agents that have full introspection in their own knowledge as well as in that of others. This mutual full introspection between agents is motivated by an application of dAEL in access control. We define 2- and 3-valued semantic operators for dAEL. Using these operators, approximation fixpoint theory, an abstract algebraic framework that unifies different knowledge representation formalisms, immediately yields us a family of semantics for dAEL, each based on different intuitions that are well-studied in the context of AEL. The application in access control also motivates an extension of dAEL with inductive definitions (dAEL(ID)). We explain a use-case from access control to demonstrate how dAEL(ID) can be fruitfully applied to this domain and discuss how well-suited the different semantics are for the application in access control.

SAT Conference 2016 Conference Paper

Improved Static Symmetry Breaking for SAT

  • Jo Devriendt
  • Bart Bogaerts 0001
  • Maurice Bruynooghe
  • Marc Denecker

An effective SAT preprocessing technique is the construction of symmetry breaking formulas: auxiliary clauses that guide a SAT solver away from needless exploration of symmetric subproblems. However, during the past decade, state-of-the-art SAT solvers rarely incorporated symmetry breaking. This suggests that the reduction of the search space does not outweigh the overhead incurred by detecting symmetry and constructing symmetry breaking formulas. We investigate three methods to construct more effective symmetry breaking formulas. The first method simply improves the encoding of symmetry breaking formulas. The second detects special symmetry subgroups, for which complete symmetry breaking formulas exist. The third infers binary symmetry breaking clauses for a symmetry group as a whole rather than longer clauses for individual symmetries. We implement these methods in a symmetry breaking preprocessor, and verify their effectiveness on both hand-picked problems and the 2014 SAT competition benchmark set. Our experiments indicate that our symmetry breaking preprocessor improves the current state-of-the-art in static symmetry breaking for SAT and has a sufficiently low overhead to improve the performance of modern SAT solvers on hard combinatorial instances.
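The idea behind symmetry breaking formulas can be illustrated semantically. The sketch below is a hypothetical illustration, not the paper's encoding: instead of emitting CNF clauses, it filters assignments directly, keeping from each symmetric orbit only the lexicographically smallest assignments, i.e., the ones a lex-leader breaking formula would leave satisfiable. The names `apply_perm` and `lex_leader` are illustrative.

```python
from itertools import product

def apply_perm(perm, assignment):
    """Image of an assignment under a permutation of variable indices."""
    return tuple(assignment[perm[i]] for i in range(len(assignment)))

def lex_leader(assignment, perms):
    """True iff the assignment is lexicographically <= each symmetric image."""
    return all(assignment <= apply_perm(p, assignment) for p in perms)

# Symmetry: swapping variables 0 and 1 (identity on variable 2).
swap01 = [1, 0, 2]
survivors = [a for a in product([0, 1], repeat=3) if lex_leader(a, [swap01])]
# 6 of the 8 assignments survive: exactly those with a[0] <= a[1].
```

A breaking formula achieves the same pruning inside the solver by adding clauses that are falsified precisely by the non-lex-leader assignments.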

IJCAI Conference 2016 Conference Paper

Relevance for SAT(ID)

  • Joachim Jansen
  • Bart Bogaerts
  • Jo Devriendt
  • Gerda Janssens
  • Marc Denecker

Inductive definitions and justifications are well-studied concepts. Solvers that support inductive definitions have been developed, but several of their computationally nice properties have never been exploited to improve these solvers. In this paper, we present a new notion called relevance. We determine a class of literals that are relevant for a given definition and partial interpretation, and show that choices on irrelevant atoms can never benefit the search for a model. We propose an early stopping criterion and a modification of existing heuristics that exploit relevance. We present a first implementation in MinisatID and experimentally evaluate our approach, and study how often existing solvers make choices on irrelevant atoms.

AAAI Conference 2015 Conference Paper

Grounded Fixpoints

  • Bart Bogaerts
  • Joost Vennekens
  • Marc Denecker

Algebraical fixpoint theory is an invaluable instrument for studying semantics of logics. For example, all major semantics of logic programming, autoepistemic logic, default logic and more recently, abstract argumentation have been shown to be induced by the different types of fixpoints defined in approximation fixpoint theory (AFT). In this paper, we add a new type of fixpoint to AFT: a grounded fixpoint of lattice operator O: L → L is defined as a lattice element x ∈ L such that O(x) = x and for all v ∈ L such that O(v ∧ x) ≤ v, it holds that x ≤ v. On the algebraical level, we show that all grounded fixpoints are minimal fixpoints approximated by the well-founded fixpoint and that all stable fixpoints are grounded. On the logical level, grounded fixpoints provide a new mathematically simple and compact type of semantics for any logic with a (possibly non-monotone) semantic operator. We explain the intuition underlying this semantics in the context of logic programming by pointing out that grounded fixpoints of the immediate consequence operator are interpretations that have no non-trivial unfounded sets. We also analyse the complexity of the induced semantics. Summarised, grounded fixpoint semantics is a new, probably the simplest and most compact, element in the family of semantics that capture basic intuitions and principles of various non-monotonic logics.
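The groundedness condition quoted in the abstract can be checked by brute force on a small finite lattice. The following is a minimal sketch under assumed names (`is_grounded_fixpoint`, a toy operator `O`), using the powerset lattice of {1, 2} ordered by subset, with meet = intersection.

```python
from itertools import combinations

def powerset(base):
    """All subsets of base, smallest first."""
    s = list(base)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_grounded_fixpoint(O, x, lattice):
    """x is grounded iff O(x) = x and, for every v with O(v ∩ x) ⊆ v, x ⊆ v."""
    if O(x) != x:
        return False
    return all(x <= v for v in lattice if O(v & x) <= v)

lattice = powerset({1, 2})

def O(x):
    # A monotone toy operator: always derive 1; derive 2 once 1 is present.
    y = {1}
    if 1 in x:
        y.add(2)
    return frozenset(y)

grounded = [set(x) for x in lattice if is_grounded_fixpoint(O, x, lattice)]
# Only {1, 2}, the least fixpoint of this monotone operator, is grounded.
```

Consistent with the abstract, the grounded fixpoint found here is a minimal fixpoint.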

JAIR Journal 2015 Journal Article

Lazy Model Expansion: Interleaving Grounding with Search

  • Broes De Cat
  • Marc Denecker
  • Maurice Bruynooghe
  • Peter Stuckey

Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages like ASP and FO(.) and a CSP modeling language such as Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blow-up of the size of the theory caused by the grounding phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split in a part to be grounded and a part that can be justified. Experimental results illustrate the power and generality of this approach.

IJCAI Conference 2015 Conference Paper

Partial Grounded Fixpoints

  • Bart Bogaerts
  • Joost Vennekens
  • Marc Denecker

Approximation fixpoint theory (AFT) is an algebraical study of fixpoints of lattice operators. Recently, AFT was extended with the notion of a grounded fixpoint. This type of fixpoint formalises common intuitions from various application domains of AFT, including logic programming, default logic, autoepistemic logic and abstract argumentation frameworks. The study of groundedness was limited to exact lattice points; in this paper, we extend it to the bilattice: for an approximator A of O, we define A-groundedness. We show that all partial A-stable fixpoints are A-grounded and that the A-well-founded fixpoint is uniquely characterised as the least precise A-grounded fixpoint. We apply our theory to logic programming and study complexity.

ECAI Conference 2014 Conference Paper

Inference in the FO(C) Modelling Language

  • Bart Bogaerts 0001
  • Joost Vennekens
  • Marc Denecker
  • Jan Van den Bussche

Recently, FO(C), the integration of C-LOG with classical logic, was introduced as a knowledge representation language. Up to this point, no systems exist that perform inference on FO(C), and very little is known about properties of inference in FO(C). In this paper, we study both of the above problems. We define normal forms for FO(C), one of which corresponds to FO(ID). We define transformations between these normal forms, and show that, using these transformations, several inference tasks for FO(C) can be reduced to inference tasks for FO(ID), for which solvers exist. We implemented this transformation and hence, created the first system that performs inference in FO(C). We also provide results about the complexity of reasoning in FO(C).

KR Conference 2014 Conference Paper

The Well-Founded Semantics Is the Principle of Inductive Definition, Revisited

  • Marc Denecker
  • Joost Vennekens

In the past, there have been several attempts to explain logic programming under the well-founded semantics as a logic of inductive definitions. A weakness in all is the absence of an obvious connection between how we understand various types of informal inductive definitions in mathematical text and the complex mathematics of the well-founded semantics. In this paper, we close this gap. We formalize the induction process in the most common principles and prove that the well-founded model construction generalizes them all. (Inductive) definitions serve to define formal objects, but they are not formal objects themselves. As such, we will refer to them as informal definitions. They are commonly phrased in natural language as collections of informal rules, possibly with an induction order. They define a set (or more than one, in case of simultaneous induction) in terms of other sets, which we will call parameters. E.g., Definition 1 has graph G as a parameter and Definition 2 the vocabulary Σ. Informal inductive definitions are broadly used in mathematics, broadly understood and, despite their informal nature, they are of mathematical precision. The set defined by such a definition can be characterized in two quite different ways: "nonconstructively", as the least set closed under rule application, and "constructively", as the set obtained by iterated rule application. By Tarski's cherished result on the least fixpoint of monotone operators, both principles coincide. It comes as a surprise to many, but the last statement is only half true. Definition 2 is non-monotone due to its 4th rule, and Tarski's result does not apply to it. There are infinitely many minimal sets that are closed under these rules and some are just weird: e.g., in some of them {P} |= ¬P holds (see Example 1). What this shows is that the constructive principle is the more fundamental of the two. An inductive definition defines a set by describing how to construct it through an induction process.
This process starts from the empty set and proceeds by applying rules until the set is closed under the rules. In case of induction over a well-founded order, rules must be applied "along" the specified order. This is the intuition that we guess all of us share and that will be formalized below. The precision of informal inductive definitions makes them an ideal target for a formal empirical study. This is what we do in the next section: we define a simple formal, rule-based syntax with atomic heads and first order logic (FO) bodies. This leads to the following representation of the above informal definitions.
∀x∀y(R(x, y) ← G(x, y))
∀x∀y(R(x, y) ← ∃z(R(x, z) ∧ R(z, y)))
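The constructive reading of the two rules at the end of this entry, which define reachability R as the transitive closure of a graph G, can be sketched as iterated rule application starting from the empty set; the helper name below is illustrative, not from the paper.

```python
def transitive_closure(G):
    """Least set R closed under R ⊇ G and R ⊇ {(x, y) | ∃z: (x,z), (z,y) ∈ R}."""
    R = set()
    while True:
        # One round of rule application: base edges plus one-step compositions.
        new = set(G) | {(x, y) for (x, z1) in R for (z2, y) in R if z1 == z2}
        if new == R:  # closed under the rules: the induction process halts
            return R
        R = new

G = {(1, 2), (2, 3)}
closure = transitive_closure(G)  # adds the derived pair (1, 3)
```

For a monotone definition like this one, the iteration reaches exactly the least set closed under the rules, matching the nonconstructive characterization.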

KR Conference 2012 Conference Paper

Ordered Epistemic Logic

  • Hanne Vlaeminck
  • Joost Vennekens
  • Maurice Bruynooghe
  • Marc Denecker

Many examples of epistemic reasoning in the literature exhibit a stratified structure: defaults are formulated on top of an incomplete knowledge base. These defaults derive extra information in case information is missing in the knowledge base. In autoepistemic logic, default logic and ASP this inherent stratification is not preserved, as these logics may refer to their own knowledge or logical consequences. Defining the semantics of such logics requires a complex mathematical construction. As an alternative, this paper further develops ordered epistemic logic. This logic extends first order logic with a modal operator while stratification is maintained, which allows us to define an easy-to-understand semantics. Moreover, inference tasks have a lower complexity than in autoepistemic logic, and the logic integrates seamlessly into classical logic and its extensions. In this paper we also propose a generalization of ordered epistemic logic, which we call distributed ordered epistemic logic. We argue that it can provide a semantic foundation for a number of distributed knowledge representation formalisms found in the literature.

JELIA Conference 2010 Conference Paper

An Approximative Inference Method for Solving ∃∀SO Satisfiability Problems

  • Hanne Vlaeminck
  • Johan Wittocx
  • Joost Vennekens
  • Marc Denecker
  • Maurice Bruynooghe

The fragment ∃∀SO(ID) of second order logic extended with inductive definitions is expressive, and many interesting problems, such as conformant planning, can be naturally expressed as finite domain satisfiability problems of this logic. Such satisfiability problems are computationally hard (Σ₂ᴾ). In this paper, we develop an approximate, sound but incomplete method for solving such problems that transforms an ∃∀SO(ID) problem into an ∃SO(ID) problem. The finite domain satisfiability problem for the latter language is in NP and can be handled by several existing solvers. We show that this provides an effective method for solving practically useful problems, such as common examples of conformant planning. We also propose a more complete translation to ∃SO(FP), existential SO extended with nested inductive and coinductive definitions.

JELIA Conference 2010 Conference Paper

Embracing Events in Causal Modelling: Interventions and Counterfactuals in CP-Logic

  • Joost Vennekens
  • Maurice Bruynooghe
  • Marc Denecker

This paper integrates Pearl’s seminal work on probability and causality with that of Shafer. Using the language of CP-logic, it transposes Pearl’s analysis of interventions and counterfactuals to the semantic context of Shafer’s probability trees. This gives us definitions that work not on the level of random variables, but on the level of Humean events. There are some tangible benefits to our approach: we can elegantly handle counterfactuals in the context of cyclic causal relations, and are able to consider interventions that are both more fine-grained and more elaborate than Pearl’s.

KR Conference 2008 Conference Paper

Accuracy and Efficiency of Fixpoint Methods for Approximate Query Answering in Locally Complete Databases

  • Álvaro Cortés-Calabuig
  • Marc Denecker
  • Ofer Arieli
  • Maurice Bruynooghe

Standard databases convey Reiter's closed-world assumption that an atom not in the database is false. This assumption is relaxed in locally closed databases that are sound but only partially complete about their domain. One of the consequences of the weakening of the closed-world assumption is that query answering in locally closed databases is undecidable. In this paper, we develop efficient approximate methods for query answering, based on fixpoint computations, and investigate conditions that assure the optimality of these methods. Our approach of approximative reasoning may be incorporated in different contexts where incompleteness plays a major role and efficient reasoning is imperative.

KR Conference 2008 Conference Paper

Approximate Reasoning in First-Order Logic Theories

  • Johan Wittocx
  • Maarten Mariën
  • Marc Denecker

Many computational settings are concerned with finding (all) models of a first-order logic theory for a fixed, finite domain. In this paper, we present a method to compute from a given theory and finite domain an approximate structure: a structure that approximates all models. We show confluence of this method and investigate its complexity. We discuss some applications, including 3-valued query answering in integrated and partially incomplete databases, and improved grounding in the context of model expansion for first-order logic.

SAT Conference 2008 Conference Paper

SAT(ID): Satisfiability of Propositional Logic Extended with Inductive Definitions

  • Maarten Mariën
  • Johan Wittocx
  • Marc Denecker
  • Maurice Bruynooghe

We investigate the satisfiability problem, SAT(ID), of an extension of propositional logic with inductive definitions. We demonstrate how to extend existing SAT solvers to become SAT(ID) solvers, and provide an implementation on top of MiniSat. We also report on a performance study, in which our implementation exhibits the expected benefits: full use of the underlying SAT solver’s potential.

LPAR Conference 2007 Conference Paper

Integrating Inductive Definitions in SAT

  • Maarten Mariën
  • Johan Wittocx
  • Marc Denecker

We investigate techniques for supporting inductive definitions (IDs) in SAT, and report on an implementation, called MidL, of the resulting solver. This solver was first introduced in [11], as a part of a declarative problem solving framework. We go about our investigation by proposing a new formulation of the semantics of IDs as presented in [2]. This new formulation suggests a way to perform the computational task involved, resulting in an algorithm supporting IDs. We show in detail how to integrate our algorithm with traditional SAT solving techniques. We also point out the similarities with another algorithm that was recently developed for ASP [1]. Indeed, our formulation reveals a very tight relation with stable model semantics. We conclude with an experimental validation of our approach using MidL.

JELIA Conference 2006 Conference Paper

Distance-Based Repairs of Databases

  • Ofer Arieli
  • Marc Denecker
  • Maurice Bruynooghe

We introduce a general framework for repairing inconsistent databases by distance-based considerations. The uniform way of representing repairs and their semantics clarifies the essence behind various approaches to consistency restoration in database systems, helps to compare the underlying formalisms, and relates them to existing methods of defining belief revision operators, merging data sets, and integrating information systems.

LPAR Conference 2006 Conference Paper

Representation of Partial Knowledge and Query Answering in Locally Complete Databases

  • Álvaro Cortés-Calabuig
  • Marc Denecker
  • Ofer Arieli
  • Maurice Bruynooghe

The Local Closed-World Assumption (LCWA) is a generalization of Reiter’s Closed-World Assumption (CWA) for relational databases that may be incomplete. Two basic questions that are related to this assumption are: (1) how to represent the fact that only part of the information is known to be complete, and (2) how to properly reason with this information, that is: how to determine whether an answer to a database query is complete even though the database information is incomplete. In this paper we concentrate on the second issue, based on a treatment of the first issue developed in earlier work of the authors. For this we consider a fixpoint semantics for declarative theories that represent locally complete databases. This semantics is based on 3-valued interpretations that allow us to distinguish between the certain and possible consequences of the database’s theory.

JELIA Conference 2006 Conference Paper

Representing Causal Information About a Probabilistic Process

  • Joost Vennekens
  • Marc Denecker
  • Maurice Bruynooghe

We study causal information about probabilistic processes, i.e., information about why events occur. A language is developed in which such information can be formally represented and we investigate when this suffices to uniquely characterize the probability distribution that results from such a process. We examine both detailed representations of temporal aspects and representations in which time is implicit. In this last case, our logic turns into a more fine-grained version of Pearl’s approach to causality. We relate our logic to certain probabilistic logic programming languages, which leads to a clearer view on the knowledge representation properties of these languages. We show that our logic induces a semantics for disjunctive logic programs, in which these represent non-deterministic processes. We show that logic programs under the well-founded semantics can be seen as a language of deterministic causality, which we relate to McCain & Turner’s causal theories.

LPAR Conference 2005 Conference Paper

Satisfiability Checking for PC(ID)

  • Maarten Mariën
  • Rudradeb Mitra
  • Marc Denecker
  • Maurice Bruynooghe

The logic FO(ID) extends classical first order logic with inductive definitions. This paper studies the satisfiability problem for PC(ID), its propositional fragment. We develop a framework for model generation in this logic, present an algorithm and prove its correctness. As FO(ID) is an integration of classical logic and logic programming, our algorithm integrates techniques from SAT and ASP. We report on a prototype system, called MidL, experimentally validating our approach.

KR Conference 2004 Conference Paper

Inductive Situation Calculus

  • Marc Denecker
  • Eugenia Ternovska

Temporal reasoning has always been a major test case for knowledge representation formalisms. In this paper, we develop an inductive variant of the situation calculus using the Logic for Non-Monotone Inductive Definitions (NMID). This logic has been proposed recently and is an extension of classical logic. It allows for a uniform representation of various forms of definitions, including monotone inductive definitions and non-monotone forms of inductive definitions such as iterated induction and induction over well-founded posets. In the NMID-axiomatisation of the situation calculus, fluents and causality predicates are defined by simultaneous induction on the well-founded poset of situations. The inductive approach allows us to solve the ramification problem for the situation calculus in a uniform and modular way. Our solution is among the most general solutions for the ramification problem in the situation calculus. Using previously developed modularity techniques, we show that the basic variant of the inductive situation calculus without ramification rules is equivalent to Reiter-style situation calculus.

JELIA Conference 2004 Conference Paper

On the Relation Between ID-Logic and Answer Set Programming

  • Maarten Mariën
  • David Gilis
  • Marc Denecker

This paper is an analysis of two knowledge representation extensions of logic programming, namely Answer Set Programming and ID-Logic. Our aim is to compare both logics on the level of declarative reading, practical methodology and formal semantics. At the level of methodology, we put forward the thesis that in many (but not all) existing applications of ASP, an ASP program is used to encode definitions and assertions, similarly to ID-Logic. We illustrate this thesis with an example and present a formal result that supports it, namely an equivalence preserving translation from a class of ID-Logic theories into ASP. This translation can also be exploited to use the current efficient ASP solvers to reason on ID-Logic theories, and it has been used to implement a model generator for ID-Logic.

NMR Workshop 2004 Conference Paper

Splitting an operator: an algebraic modularity result and its application to auto-epistemic logic

  • Joost Vennekens
  • David Gilis
  • Marc Denecker

It is well known that it is possible to split certain autoepistemic theories under the semantics of expansions, i.e. to divide such a theory into a number of different "levels", such that the models of the entire theory can be constructed by incrementally constructing models for each level. Similar results exist for other non-monotonic formalisms, such as logic programming and default logic. In this work, we present a general, algebraic theory of splitting under a fixpoint semantics. Together with the framework of approximation theory, a general fixpoint theory for arbitrary operators, this gives us a uniform and powerful way of deriving splitting results for each logic with a fixpoint semantics. We demonstrate the usefulness of this approach by applying our results to auto-epistemic logic.

KR Conference 2004 Conference Paper

What's in a model? Epistemological analysis of Logic Programming

  • Marc Denecker

It is commonly believed that the meaning of a formal declarative knowledge representation language is determined by its formal semantics. This is not quite so. This paper shows an epistemological ambiguity that arises in the context of logic programming. Several different logic programming formalisms and semantics have been proposed. Hence, logic programming can be seen as an overlapping family of formal logics, each induced by a pair of a formal syntax and a formal semantics. We would expect that (a) each such pair has a unique declarative reading and (b) for a program in the intersection of several formal LP logics with the same formal semantics in each of them, its declarative reading is the same in each of them. I show in this paper that neither (a) nor (b) holds. The paper investigates the causes and the consequences of this phenomenon and points out some directions to overcome the ambiguity.

NMR Workshop 2002 Conference Paper

Compositionality results for stratified nonmonotone operators

  • David Gilis
  • Marc Denecker

Approximation theory is an extension of Tarski's least fixpoint theory of monotone lattice operators to the generalised case of arbitrary nonmonotone operators. Approximation theory captures intuitions and principles found in several nonmonotonic reasoning systems and describes all main 2-, 3- and 4-valued semantics of logic programming, default logic and autoepistemic logic. This paper investigates a composition property of lattice operators and their fixpoints. We investigate the circumstances in which an operator defined on a product lattice can be split up into operators on the component lattices, and show how its various fixpoints can be obtained by composing the appropriate fixpoints of the component operators. In the context of a specific nonmonotonic logic, our theorem translates into compositionality theorems.

NMR Workshop 2002 Conference Paper

Unifying abductive logic programming and stable logic programming

  • Marc Denecker

Abductive Logic Programming (ALP) and Answer Set Programming (ASP) were developed independently and are defined in terms of quite different foundations and concepts. Yet, the existence of formally proven relationships and the often striking congruence between application programs in both formalisms suggest that, at a deeper level, both formalisms might be tightly related. In this talk, I will take an epistemological viewpoint to investigate and uncover this relationship. Epistemological foundations of both formalisms can be defined in terms of two nonmonotonic reasoning principles, in particular autoepistemic/default reasoning on the one hand and iterated inductive definitions on the other hand. Both principles are fundamentally different, even contrary nonmonotonic principles, one being a knowledge-minimising principle while the second is a knowledge-maximising principle. In the talk, I will argue that Answer Set Programming with two negations is an instance of the first principle, whereas Abductive Logic Programming and Stable Logic Programming are instances of the second principle. Stable logic programming is thus presented as a fundamentally different logic than Answer Set Programming with two negations. This contradicts the standard view in which Stable Logic Programming is viewed as a subformalism of ASP. I will back up my position with formal results showing that stable logic programming and ASP attribute fundamentally different modalities to the negation as failure operator and the rule operator. On the other hand, this work unifies Stable Logic Programming and Abductive Logic Programming as syntactic variants of the same underlying logic. The impact on methodology will be discussed. At the computational level, Abductive Logic Programming and Stable Logic Programming provide different computational approaches for essentially the same type of reasoning problem.
During his invited talk at NMR2000, Victor Marek complained about the lack of relationship and the confused position of logic programming in the broader area of mathematical and philosophical logic. In this talk, a big step is made in clarifying the picture. At the same time, the talk addresses a number of fundamental logical ambiguities and long-standing open problems about the meaning of language constructs such as negation as failure and the rule operator in logic programming extensions.

LPAR Conference 2001 Conference Paper

Coherent Composition of Distributed Knowledge-Bases Through Abduction

  • Ofer Arieli
  • Bert Van Nuffelen
  • Marc Denecker
  • Maurice Bruynooghe

Abstract We introduce an abductive method for coherent composition of distributed data. Our approach is based on an abductive inference procedure that is applied on a meta-theory that relates different, possibly inconsistent, input databases. Repairs of the integrated data are computed, resulting in a consistent output database that satisfies the meta-theory. Our framework is based on the A-system, which is an abductive system that implements SLDNFA-resolution. The outcome is a robust application that, to the best of our knowledge, is more expressive (thus more general) than any other existing application for coherent data integration.

LPAR Conference 2000 Conference Paper

Logic Programming Approaches for Representing and Solving Constraint Satisfaction Problems: A Comparison

  • Nikolay Pelov
  • Emmanuel De Mot
  • Marc Denecker

Abstract Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand there is constraint logic programming which computes a solution as an answer substitution to a query containing the variables of the constraint satisfaction problem. On the other hand there are systems based on stable model semantics, abductive systems, and first order logic model generators which compute solutions as models of some theory. This paper compares these different approaches from the point of view of knowledge representation (how declarative are the programs) and from the point of view of performance (how good are they at solving typical problems).

AAAI Conference 1998 Conference Paper

Fixpoint 3-Valued Semantics for Autoepistemic Logic

  • Marc Denecker
  • Victor Marek

The paper presents a constructive 3-valued semantics for autoepistemic logic (AEL). We introduce a derivation operator and define the semantics as its least fixpoint. The semantics is 3-valued in the sense that, for some formulas, the least fixpoint does not specify whether they are believed or not. We show that complete fixpoints of the derivation operator correspond to Moore's stable expansions. In the case of modal representations of logic programs, our least fixpoint semantics expresses well-founded semantics or 3-valued Fitting-Kunen semantics (depending on the embedding used). We show that, computationally, our semantics is simpler than the semantics proposed by Moore (assuming that the polynomial hierarchy does not collapse).