Arrow Research search

Author name cluster

Kewen Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

36 papers
1 author row

Possible papers

36

AAAI Conference 2025 Conference Paper

Rule-Guided Graph Neural Networks for Explainable Knowledge Graph Reasoning

  • Zhe Wang
  • Suxue Ma
  • Kewen Wang
  • Zhiqiang Zhuang

The connections between symbolic rules and neural networks have been explored in various directions, including rule mining through neural networks and rule-based explanation for neural networks. These approaches allow symbolic rules to be extracted from neural network models, which offers explainability for the models. However, the plausibility of the extracted rules is rarely analysed. In this paper, we show that the confidence degrees of extracted rules are generally not high, and we propose a new family of Graph Neural Networks that can be trained under the guidance of rules, so that the inference of our model simulates rule reasoning. Moreover, rules with high confidence degrees can be extracted from the trained model that align with its inference, which verifies the effectiveness of the rule guidance. Experimental evaluation on knowledge graph reasoning tasks further demonstrates the effectiveness of our model.
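The confidence degree of a rule is commonly measured as the fraction of body groundings whose head fact also holds in the knowledge graph (the standard definition used by rule miners such as AMIE). A minimal sketch over a toy knowledge graph; the facts and predicate names are illustrative, not from the paper:

```python
# Toy confidence computation for a chain rule over binary predicates:
#   head(x, z) <- r1(x, y), r2(y, z)
# confidence = (# body groundings whose head fact holds) / (# body groundings)

facts = {
    ("bornIn", "alice", "paris"),
    ("cityOf", "paris", "france"),
    ("cityOf", "lyon", "france"),
    ("bornIn", "bob", "lyon"),
    ("nationality", "alice", "france"),
}

def confidence(head, r1, r2, facts):
    """Confidence of the rule head(x, z) <- r1(x, y), r2(y, z)."""
    support = total = 0
    for (p1, x, y) in facts:
        if p1 != r1:
            continue
        for (p2, y2, z) in facts:
            if p2 != r2 or y2 != y:
                continue
            total += 1                       # one grounding of the body
            if (head, x, z) in facts:
                support += 1                 # ... whose head also holds
    return support / total if total else 0.0

print(confidence("nationality", "bornIn", "cityOf", facts))  # 0.5
```

Here the rule nationality(x, z) <- bornIn(x, y), cityOf(y, z) has two body groundings but only one supporting head fact, giving confidence 0.5.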

AAMAS Conference 2024 Conference Paper

Maximising the Influence of Temporary Participants in Opinion Formation

  • Zhiqiang Zhuang
  • Kewen Wang
  • Zhe Wang
  • Junhu Wang
  • Yinong Yang

DeGroot-style opinion formation presumes continuous interaction among the agents of a social network. Hence, it cannot handle agents external to the social network that interact only temporarily with the permanent ones. Many real-world organisations and individuals fall into this category. For instance, a company tries to persuade as many people as possible to buy its products but, due to various constraints, can only exert its influence for a limited amount of time. We propose a variant of the DeGroot model that allows an external agent to interact with the permanent ones for a preset period of time. By analysing and simulating this variant, we obtain several insights into maximising an external agent's influence in opinion formation.
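In the standard DeGroot model, opinions update as x(t+1) = Wx(t) for a row-stochastic trust matrix W. A minimal simulation sketch of a temporary external influence in this style; the trust matrix, parameters, and the weight-rescaling scheme are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

# Standard DeGroot updating among 3 permanent agents, plus one external
# agent with a fixed opinion that influences agent 0 only for the first
# `active_steps` rounds. All weights are illustrative.

W = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])   # row-stochastic trust matrix

def simulate(x0, external_opinion, external_weight, active_steps, steps):
    x = np.array(x0, dtype=float)
    for t in range(steps):
        x_new = W @ x
        if t < active_steps:
            # While the external agent is active, agent 0 mixes in its
            # opinion; the permanent trust is rescaled so weights sum to 1.
            x_new[0] = ((1 - external_weight) * (W[0] @ x)
                        + external_weight * external_opinion)
        x = x_new
    return x

print(simulate([0.1, 0.5, 0.9], external_opinion=1.0,
               external_weight=0.5, active_steps=5, steps=50))
```

After the external agent leaves, ordinary DeGroot averaging resumes, so the permanent agents still reach a consensus, but one shifted toward the external opinion relative to a run without the temporary influence.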

AAMAS Conference 2023 Conference Paper

Price of Anarchy for First Price Auction with Risk-Averse Bidders

  • Zhiqiang Zhuang
  • Kewen Wang
  • Zhe Wang

Inquiry into the price of anarchy (POA) for auctions has been almost entirely confined to the risk-neutral setting. Nonetheless, empirical and experimental studies suggest that real-world agents are more or less risk-averse rather than strictly risk-neutral. In this paper, we study the POA of first-price single-item auctions (FPA) with risk-averse bidders. For completeness, we consider both risk-averse and risk-neutral sellers. In the former case, we establish that the POA is 1/2 for both the symmetric FPA and the FPA in general. In the latter case, we show that the POA can be arbitrarily bad for the symmetric FPA and characterise the conditions for the POA to be constant. In response to a fairness issue in the case of risk-neutral sellers, we propose the notion of suboptimal social welfare. We subsequently derive POA bounds with respect to this new notion, where the bounds are parameterised by two variables that capture the value range of the utility function.

IJCAI Conference 2022 Conference Paper

Function-words Adaptively Enhanced Attention Networks for Few-Shot Inverse Relation Classification

  • Chunliu Dou
  • Shaojuan Wu
  • Xiaowang Zhang
  • Zhiyong Feng
  • Kewen Wang

Relation classification identifies the semantic relation between two entities in a given text. While existing models perform well at classifying inverse relations with large datasets, their performance drops significantly in few-shot learning. In this paper, we propose a function-words adaptively enhanced attention framework (FAEA) for few-shot inverse relation classification, in which a hybrid attention model is designed to attend to class-related function words based on meta-learning. As the involvement of function words brings in significant intra-class redundancy, an adaptive message passing mechanism is introduced to capture and transfer inter-class differences. We mathematically analyze the negative impact of function words on dot-product measurement, which explains why the message passing mechanism effectively reduces this impact. Our experimental results show that FAEA outperforms strong baselines; in particular, inverse relation accuracy is improved by 14.33% under the 1-shot setting on FewRel 1.0.

KR Conference 2022 Conference Paper

Learning Typed Rules over Knowledge Graphs

  • Hong Wu
  • Zhe Wang
  • Kewen Wang
  • Yi-Dong Shen

Rule learning from large datasets has regained extensive interest as rules are useful for developing explainable approaches to many applications in knowledge graphs. However, existing methods for rule learning are still limited in terms of scalability and rule quality. This paper presents a new method for learning typed rules by employing entity class information. Our experimental evaluation shows the superiority of our system compared to state-of-the-art rule learners. In particular, we demonstrate the usefulness of typed rules in reasoning for link prediction.

AAAI Conference 2020 Conference Paper

On the Expressivity of ASK Queries in SPARQL

  • Xiaowang Zhang
  • Jan Van den Bussche
  • Kewen Wang
  • Heng Zhang
  • Xuanxing Yang
  • Zhiyong Feng

As a major query type in SPARQL, ASK queries are boolean queries and have found applications in several domains such as semantic SPARQL optimization. This paper is a first systematic study of the relative expressive power of various fragments of ASK queries in SPARQL. Among many new results, a surprising one is that the operator UNION is redundant for ASK queries. The results in this paper as a whole paint a rich picture of the expressivity of fragments of ASK queries with the four basic operators of SPARQL 1.0, possibly together with negation. The work in this paper provides a guideline for future SPARQL query optimization and implementation.

IJCAI Conference 2020 Conference Paper

Query Answering for Existential Rules via Efficient Datalog Rewriting

  • Zhe Wang
  • Peng Xiao
  • Kewen Wang
  • Zhiqiang Zhuang
  • Hai Wan

Existential rules are an expressive ontology formalism for ontology-mediated query answering, for which query answering is of high complexity in general, although several tractable fragments have been identified. Existing systems based on first-order rewriting methods can produce queries too large for a DBMS to handle. It has been shown that datalog rewriting can result in more compact queries, yet previously proposed datalog rewriting methods are mostly inefficient in practice. In this paper, we fill the gap by proposing an efficient datalog rewriting approach for answering conjunctive queries over existential rules, and we identify and combine existing fragments of existential rules for which our rewriting method terminates. We implemented a prototype system, Drewer, and experiments show that it is able to handle a wide range of benchmarks in the literature. Moreover, Drewer shows superior or comparable performance against state-of-the-art systems on both the compactness of rewriting and the efficiency of query answering.

JAIR Journal 2019 Journal Article

A Generalisation of AGM Contraction and Revision to Fragments of First-Order Logic

  • Zhiqiang Zhuang
  • Zhe Wang
  • Kewen Wang
  • James Delgrande

AGM contraction and revision assume an underlying logic that contains propositional logic. Consequently, this assumption excludes many useful logics such as the Horn fragment of propositional logic and most description logics. Our goal in this paper is to generalise AGM contraction and revision to (near-)arbitrary fragments of classical first-order logic. To this end, we first define a very general logic that captures these fragments. In so doing, we make the modest assumptions that a logic contains conjunction and that information is expressed by closed formulas or sentences. The resulting logic is called first-order conjunctive logic, or FC logic for short. We then take as our point of departure the AGM approach of constructing contraction functions through epistemic entrenchment, that is, entrenchment-based contraction. We redefine entrenchment-based contraction in a way that applies to any FC logic, which we call FC contraction. We prove a representation theorem showing its compliance with all the AGM contraction postulates except for the controversial recovery postulate. We also give methods for constructing revision functions through epistemic entrenchment, which we call FC revision and which likewise apply to any FC logic. We show that if the underlying FC logic contains tautologies then FC revision complies with all the AGM revision postulates. Finally, in the context of FC logic, we provide three methods for generating revision functions via a variant of the Levi Identity, which we call contraction-, withdrawal- and cut-generated revision, and explore the notion of revision equivalence. We show that withdrawal- and cut-generated revision coincide with FC revision, as does contraction-generated revision under a finiteness condition.

AAAI Conference 2019 Conference Paper

Disjunctive Normal Form for Multi-Agent Modal Logics Based on Logical Separability

  • Liangda Fang
  • Kewen Wang
  • Zhe Wang
  • Ximing Wen

Modal logics are primary formalisms for multi-agent systems, but major reasoning tasks in such logics are intractable, which impedes applications of multi-agent modal logics such as automatic planning. One technique for tackling the intractability is to identify a fragment, called a normal form, of multi-agent logics that is expressive but tractable for reasoning tasks such as entailment checking, bounded conjunction transformation and forgetting. For instance, the DNF of propositional logic is tractable for these reasoning tasks. In this paper, we first introduce a notion of logical separability and then define a novel disjunctive normal form SDNF for the multi-agent logic Kn, which overcomes some shortcomings of existing approaches. In particular, we show that every modal formula in Kn can be equivalently cast as a formula in SDNF, that the major reasoning tasks tractable in propositional DNF are also tractable in SDNF, and moreover that formulas in SDNF enjoy the property of logical separability. To demonstrate the usefulness of our approach, we apply SDNF to multi-agent epistemic planning. Finally, we extend these results to three more complex multi-agent logics: Dn, K45n and KD45n.

AAAI Conference 2018 Conference Paper

Forgetting and Unfolding for Existential Rules

  • Zhe Wang
  • Kewen Wang
  • Xiaowang Zhang

Existential rules, a family of expressive ontology languages, inherit desired expressive and reasoning properties from both description logics and logic programming. On the other hand, forgetting is a well-studied operation for ontology reuse, obfuscation and analysis. Yet it is challenging to establish a theory of forgetting for existential rules. In this paper, we lay the foundation for a theory of forgetting for existential rules by developing a novel notion of unfolding. In particular, we introduce a definition of forgetting for existential rules in terms of query answering and provide a characterisation of forgetting by the unfolding. A result of forgetting may not be expressible in existential rules, and we then capture the expressibility of forgetting by a variant of boundedness. While the expressibility is undecidable in general, we identify a decidable fragment. Finally, we provide an algorithm for forgetting in this fragment.

AAAI Conference 2018 Conference Paper

On the Satisfiability Problem of Patterns in SPARQL 1.1

  • Xiaowang Zhang
  • Jan Van den Bussche
  • Kewen Wang
  • Zhe Wang

Pattern satisfiability is a fundamental problem for SPARQL. This paper provides a complete analysis of the decidability/undecidability of satisfiability problems for SPARQL 1.1 patterns. A surprising result is the undecidability of satisfiability for SPARQL 1.1 patterns when only AND and MINUS are expressible. It is also shown that any fragment of SPARQL 1.1 that cannot express both AND and MINUS is decidable. These results provide a guideline for future SPARQL query language design and implementation.

IJCAI Conference 2018 Conference Paper

Scalable Rule Learning via Learning Representation

  • Pouya Ghiasnezhad Omran
  • Kewen Wang
  • Zhe Wang

We study the problem of learning first-order rules from large Knowledge Graphs (KGs). With recent advances in information extraction, vast data repositories in the KG format, such as Freebase and YAGO, have become available. However, traditional techniques for rule learning do not scale to KGs. This paper presents a new approach, RLvLR, to learning rules from KGs by using embedding techniques from representation learning together with a new sampling method. Experimental results show that our system outperforms some state-of-the-art systems. Specifically, for massive KGs with hundreds of predicates and over 10M facts, RLvLR is much faster and learns many more quality rules than major rule learning systems for KGs such as AMIE+. We also used the rules mined by RLvLR in an inference module to carry out the link prediction task. In this task, RLvLR outperformed Neural LP, a state-of-the-art link prediction system, in both runtime and accuracy.

JAIR Journal 2016 Journal Article

DL-Lite Contraction and Revision

  • Zhiqiang Zhuang
  • Zhe Wang
  • Kewen Wang
  • Guilin Qi

Two essential tasks in managing description logic knowledge bases are eliminating problematic axioms and incorporating newly formed ones. Such elimination and incorporation are formalised as the operations of contraction and revision in belief change. In this paper, we deal with contraction and revision for the DL-Lite family through a model-theoretic approach. Standard description logic semantics yields an infinite number of models for DL-Lite knowledge bases, making it difficult to develop algorithms for contraction and revision that involve DL models. The key to our approach is the introduction of an alternative semantics, called type semantics, which can replace the standard semantics in characterising the standard inference tasks of DL-Lite. Type semantics has several advantages over the standard one: it is more succinct and, importantly, with a finite signature it always yields a finite number of models. We then define model-based contraction and revision functions for DL-Lite knowledge bases under type semantics and provide representation theorems for them. Finally, the finiteness and succinctness of type semantics allow us to develop tractable algorithms for instantiating the functions.

IJCAI Conference 2016 Conference Paper

Eliminating Disjunctions in Answer Set Programming by Restricted Unfolding

  • Jianmin Ji
  • Hai Wan
  • Kewen Wang
  • Zhe Wang
  • Chuhan Zhang
  • Jiangtao Xu

A disjunctive logic program under the answer set semantics can be equivalently translated into a normal logic program by the shifting transformation if the program is head-cycle-free. In this paper, we provide an answer-set-preserving rewriting of a general disjunctive program into a normal program by first applying the unfolding transformation on atoms that prevent the program from being head-cycle-free, and then shifting the resulting program. Unlike other transformations that eliminate disjunctions in answer set programming, the new rewriting is efficient for "almost" head-cycle-free programs, i.e., programs with only a few atoms that prevent them from being head-cycle-free. Based on the new rewriting, we provide an anytime algorithm to compute answer sets of a disjunctive program by calling solvers for normal logic programs. Experiments show that the algorithm is efficient for some disjunctive programs. We also extend the rewriting to non-ground answer set programs over finite structures.
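The shifting transformation replaces a disjunctive rule a1 ; ... ; ak :- body with k normal rules, each keeping one head atom and moving the others into the body under default negation. A minimal sketch; the string-based rule representation is purely illustrative:

```python
def shift(head_atoms, body):
    """Shift a disjunctive rule `head_atoms <- body` into normal rules.

    Each resulting rule keeps one head atom and adds the remaining
    head atoms to the body as default negation (`not a`). This is
    answer-set-preserving only for head-cycle-free programs.
    """
    rules = []
    for a in head_atoms:
        extra = ["not " + b for b in head_atoms if b != a]
        rules.append((a, body + extra))
    return rules

# a ; b :- c.  becomes  a :- c, not b.  and  b :- c, not a.
for head, body in shift(["a", "b"], ["c"]):
    print(f"{head} :- {', '.join(body)}.")
```

The paper's contribution is what to do when the program is not head-cycle-free: first unfold the offending atoms so that shifting becomes applicable.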

AAAI Conference 2015 Conference Paper

A Syntax-Independent Approach to Forgetting in Disjunctive Logic Programs

  • James Delgrande
  • Kewen Wang

In this paper, we present an approach to forgetting in disjunctive logic programs, where forgetting an atom from a program amounts to a reduction in the signature of that program. Notably, the approach is syntax-independent, so that if two programs are strongly equivalent, then the result of forgetting a given atom in each program is also strongly equivalent. Our central definition of forgetting is abstract: forgetting an atom from program P is characterised by the set of those SE consequences of P that do not mention the atom to be forgotten. We provide an equivalent, syntactic, characterization in which forgetting an atom p is given by those rules in the program that do not mention p, together with rules obtained by a single inference step from those rules that do mention p. Forgetting is shown to have appropriate properties; in particular, answer sets are preserved in forgetting an atom. As well, forgetting an atom via the syntactic characterization results in a modest (at worst quadratic) blowup in the program size. Finally, we provide a prototype implementation of this approach to forgetting.

AAAI Conference 2015 Conference Paper

Approximating Model-Based ABox Revision in DL-Lite: Theory and Practice

  • Guilin Qi
  • Zhe Wang
  • Kewen Wang
  • Xuefeng Fu
  • Zhiqiang Zhuang

Model-based approaches provide a semantically well-justified way to revise ontologies. In general, however, model-based revision operators are limited by the lack of efficient algorithms and the inexpressibility of revision results. In this paper, we make both theoretical and practical contributions to the efficient computation of model-based revisions in DL-Lite. Specifically, we show that maximal approximations of two well-known model-based revisions for DL-LiteR can be computed using a syntactic algorithm. However, such a coincidence of model-based and syntactic approaches does not hold when role functionality axioms are allowed. We therefore identify conditions that guarantee such a coincidence for DL-LiteFR. Our result shows that model-based and syntactic revisions can coexist seamlessly, with the advantages of both approaches combined in one revision operator. Based on our theoretical results, we develop a graph-based algorithm for the revision operators, so that graph database techniques can be used to compute ontology revisions. Preliminary evaluation results show that the graph-based algorithm can efficiently handle revision of practical ontologies with large data.

IJCAI Conference 2015 Conference Paper

Extending AGM Contraction to Arbitrary Logics

  • Zhiqiang Zhuang
  • Zhe Wang
  • Kewen Wang
  • James P Delgrande

Classic entrenchment-based contraction is not applicable to many useful logics, such as description logics. This is because the semantic construction refers to arbitrary disjunctions of formulas, while many logics do not fully support disjunction. In this paper, we present a new entrenchment-based contraction which does not rely on any logical connectives except conjunction. This contraction is applicable to all fragments of first-order logic that support conjunction. We provide a representation theorem for the contraction which shows that it satisfies all the AGM postulates except for the controversial Recovery Postulate, and is a natural generalisation of entrenchment-based contraction.

AAAI Conference 2015 Conference Paper

Instance-Driven Ontology Evolution in DL-Lite

  • Zhe Wang
  • Kewen Wang
  • Zhiqiang Zhuang
  • Guilin Qi

The development and maintenance of large and complex ontologies are often time-consuming and error-prone. Thus, automated ontology learning and evolution have attracted intensive research interest. In data-centric applications where ontologies are designed from the data or automatically learnt from it, when new data instances are added that contradict the ontology, it is often desirable to incrementally revise the ontology according to the added data. In description logics, this problem can be intuitively formulated as the operation of TBox contraction, i.e., the rational elimination of certain axioms from the logical consequences of a TBox with respect to an ABox. In this paper we introduce a model-theoretic approach to this contraction problem using an alternative semantic characterisation of DL-Lite TBoxes. We show that entailment checking (without necessarily first computing the contraction result) is in coNP, which does not shift the corresponding complexity in propositional logic, and that the problem is tractable when the size of the new data is bounded.

AAAI Conference 2015 Conference Paper

Knowledge Forgetting in Circumscription: A Preliminary Report

  • Yisong Wang
  • Kewen Wang
  • Zhe Wang
  • Zhiqiang Zhuang

The theory of (variable) forgetting has received significant attention in nonmonotonic reasoning, especially in answer set programming. However, the problem of establishing a theory of forgetting for some expressive nonmonotonic logics, such as McCarthy's circumscription, is rarely explored. In this paper a theory of forgetting for propositional circumscription is proposed, which is not a straightforward adaptation of existing approaches. In particular, some properties that are essential for existing proposals no longer hold or have to be reformulated. Several useful properties of the new forgetting are proved, which demonstrate its suitability for circumscription. A sound and complete algorithm for the forgetting is developed and an analysis of its computational complexity is given.

AAAI Conference 2015 Conference Paper

Partial Meet Revision and Contraction in Logic Programs

  • Sebastian Binnewies
  • Zhiqiang Zhuang
  • Kewen Wang

The recent years have seen several proposals aimed at placing the revision of logic programs within the belief change frameworks established for classical logic. A crucial challenge of this task lies in the nonmonotonicity of standard logic programming semantics. Existing approaches have thus used the monotonic characterisation via SE-models to develop semantic revision operators, which however neglect any syntactic information, or reverted to a syntax-oriented belief base approach altogether. In this paper, we bridge the gap between semantic and syntactic techniques by adapting the idea of a partial meet construction from classical belief change. This type of construction allows us to define new model-based operators for revising as well as contracting logic programs that preserve the syntactic structure of the programs involved. We demonstrate the rationality of our operators by testing them against the classic AGM or alternative belief change postulates adapted to the logic programming setting. We further present an algorithm that reduces the partial meet revision or contraction of a logic program to performing revision or contraction only on the relevant subsets of that program.

AAAI Conference 2015 Conference Paper

Towards Tractable and Practical ABox Abduction over Inconsistent Description Logic Ontologies

  • Jianfeng Du
  • Kewen Wang
  • Yi-Dong Shen

ABox abduction plays an important role in reasoning over description logic (DL) ontologies. However, it does not work with inconsistent DL ontologies. To tackle this problem while achieving tractability, we generalize ABox abduction from the classical semantics to an inconsistency-tolerant semantics, namely the Intersection ABox Repair (IAR) semantics, and propose the notion of IAR-explanations in inconsistent DL ontologies. We show that computing all minimal IAR-explanations is tractable in data complexity for first-order rewritable ontologies. However, the computational method may still not be practical due to a possibly large number of minimal IAR-explanations. Hence we propose to use preference information to reduce the number of explanations to be computed. In particular, based on the specificity of explanations, we introduce the notion of ⊆cps-minimal IAR-explanations, which can be computed in a highly efficient way. Accordingly, we propose a tractable level-wise method for computing all ⊆cps-minimal IAR-explanations in a first-order rewritable ontology. Experimental results on benchmarks of inconsistent ontologies show that the proposed method scales to tens of millions of assertions and can be of practical use.

AAAI Conference 2014 Conference Paper

A Tractable Approach to ABox Abduction over Description Logic Ontologies

  • Jianfeng Du
  • Kewen Wang
  • Yi-Dong Shen

ABox abduction is an important reasoning mechanism for description logic ontologies. It computes all minimal explanations (sets of ABox assertions) whose addition to a consistent ontology enforces the entailment of an observation while keeping the ontology consistent. We focus on practical computation for a general problem of ABox abduction, called the query abduction problem, where an observation is a Boolean conjunctive query and the explanations may contain fresh individuals neither in the ontology nor in the observation. However, in this problem there can be infinitely many minimal explanations. Hence we first identify a class of TBoxes called first-order rewritable TBoxes. It guarantees the existence of finitely many minimal explanations and is sufficient for many ontology applications. To reduce the number of explanations that need to be computed, we introduce a special kind of minimal explanations called representative explanations, from which all minimal explanations can be retrieved. We develop a tractable method (in data complexity) for computing all representative explanations in a consistent ontology. Experimental results demonstrate that the method is efficient and scalable for ontologies with large ABoxes.

AAAI Conference 2014 Conference Paper

Contraction and Revision over DL-Lite TBoxes

  • Zhiqiang Zhuang
  • Zhe Wang
  • Kewen Wang
  • Guilin Qi

Two essential tasks in managing Description Logic (DL) ontologies are eliminating problematic axioms and incorporating newly formed axioms. Such elimination and incorporation are formalised as the operations of contraction and revision in belief change. In this paper, we deal with contraction and revision for the DL-Lite family through a model-theoretic approach. Standard DL semantics yields an infinite number of models for DL-Lite TBoxes, so it is not practical to develop algorithms for contraction and revision that involve DL models. The key to our approach is the introduction of an alternative semantics, called type semantics, which is more succinct than DL semantics. More importantly, with a finite signature, type semantics always yields a finite number of models. We then define model-based contraction and revision for DL-Lite TBoxes under type semantics and provide representation theorems for them. Finally, the succinctness of type semantics allows us to develop tractable algorithms for both operations.

IJCAI Conference 2013 Conference Paper

Forgetting for Answer Set Programs Revisited

  • Yisong Wang
  • Kewen Wang
  • Mingyi Zhang

A new semantic forgetting for answer set programs (ASP), called SM-forgetting, is proposed in this paper. It distinguishes itself from others in that it preserves not only skeptical and credulous consequences on unforgotten variables but also strong equivalence: forgetting the same variables in strongly equivalent logic programs yields strongly equivalent results. The forgetting gives a positive answer to Gabbay, Pearce and Valverde's open question of whether ASP has the uniform interpolation property. We also investigate properties, an algorithm and computational complexities for the forgetting, showing that computing the forgetting result is generally intractable even for Horn logic programs.

AAAI Conference 2012 Conference Paper

Conflict-Based Belief Revision Operators in Possibilistic Logic

  • Guilin Qi
  • Kewen Wang

In this paper, we investigate belief revision in possibilistic logic, which is a weighted logic proposed to deal with incomplete and uncertain information. Existing revision operators in possibilistic logic are restricted in the sense that the input information can only be a formula rather than a possibilistic knowledge base, which is a set of weighted formulas. To break this restriction, we consider weighted prime implicants of a possibilistic knowledge base and use them to define novel revision operators in possibilistic logic. Intuitively, a weighted prime implicant of a possibilistic knowledge base is a logically weakest possibilistic term (i.e., a set of weighted literals) that entails the knowledge base. We first show that the existing definition of a weighted prime implicant is problematic and needs a modification. To define a revision operator using weighted prime implicants, we face two problems. The first is that we need to define the notion of a conflict set between two weighted prime implicants of two possibilistic knowledge bases to achieve minimal change. The second is that we need to define the disjunction of possibilistic terms. We solve these problems and define two conflict-based revision operators in possibilistic logic. We then adapt the well-known postulates for revision proposed by Katsuno and Mendelzon and show that our revision operators satisfy four of the basic adapted postulates and satisfy two others in some special cases.

AAAI Conference 2012 Conference Paper

FLP Semantics Without Circular Justifications for General Logic Programs

  • Yi-Dong Shen
  • Kewen Wang

The FLP semantics presented by (Faber, Leone, and Pfeifer 2004) has been widely used to define answer sets, called FLP answer sets, for different types of logic programs such as logic programs with aggregates, description logic programs (dl-programs), Hex programs, and logic programs with first-order formulas (general logic programs). However, it was recently observed that the FLP semantics may produce unintuitive answer sets with circular justifications caused by self-supporting loops. In this paper, we address the circular justification problem for general logic programs by enhancing the FLP semantics with a level mapping formalism. In particular, we extend the Gelfond-Lifschitz three step definition of the standard answer set semantics from normal logic programs to general logic programs and define for general logic programs the first FLP semantics that is free of circular justifications. We call this FLP semantics the well-justified FLP semantics. This method naturally extends to general logic programs with additional constraints like aggregates, thus providing a unifying framework for defining the well-justified FLP semantics for various types of logic programs. When this method is applied to normal logic programs with aggregates, the well-justified FLP semantics agrees with the conditional satisfaction based semantics defined by (Son, Pontelli, and Tu 2007); and when applied to dl-programs, the semantics agrees with the strongly well-supported semantics defined by (Shen 2011).

AAAI Conference 2010 Conference Paper

A New Approach to Knowledge Base Revision in DL-Lite

  • Zhe Wang
  • Kewen Wang
  • Rodney Topor

Revising knowledge bases (KBs) in description logics (DLs) in a syntax-independent manner is an important, nontrivial problem for the ontology management and DL communities. Several attempts have been made to adapt classical model-based belief revision and update techniques to DLs, but they are restricted in several ways. In particular, they do not provide operators or algorithms for general DL KB revision. The key difficulty is that, unlike propositional logic, a DL KB may have infinitely many models with complex (and possibly infinite) structures, making it difficult to define and compute revisions in terms of models. In this paper, we study general KBs in a specific DL in the DL-Lite family. We introduce the concept of features for such KBs, develop an alternative semantic characterization of KBs using features (instead of models), define two specific revision operators for KBs, and present the first algorithm for computing best approximations for syntax-independent revisions of KBs.

KR Conference 2010 Conference Paper

Revising General Knowledge Bases in Description Logics

  • Zhe Wang
  • Kewen Wang
  • Rodney Topor

This paper introduces a new methodology of revising general KBs in DL-Lite. Two specific revision operators are defined, their properties are investigated and algorithms for computing revisions are developed.

AAAI Conference 2005 Conference Paper

A Theory of Forgetting in Logic Programming

  • Kewen Wang

The study of forgetting for reasoning has attracted considerable attention in AI. However, much of the work on forgetting, and on related approaches such as independence, irrelevance and novelty, has been restricted to classical logics. This paper describes a detailed theoretical investigation of the notion of forgetting in the context of logic programming. We first provide a semantic definition of forgetting under the answer set semantics for extended logic programs. We then discuss its desirable properties and some motivating examples. An important result of this study is an algorithm for computing the result of forgetting in a logic program. Furthermore, we present a modified version of the algorithm and show that the time complexity of the new algorithm is polynomial in the size of the given logic program if the size of certain rules is fixed. We show how the proposed theory of forgetting can be used to characterize logic program updates.

AAAI Conference 2005 Conference Paper

Observation-based Model for BDI-Agents

  • Kaile Su
  • Kewen Wang

We present a new computational model of BDI-agents, called the observation-based BDI-model. The key point of this BDI-model is to express agents' beliefs, desires and intentions as a set of runs (computing paths), which is exactly a system in the interpreted system model, a well-known agent model due to Halpern and his colleagues. Our BDI-model is computationally grounded in that we are able to associate the BDI-agent model with a computer program, and formulas involving agents' beliefs, desires (goals) and intentions can be understood as properties of program computations. We present a sound and complete proof system with respect to our BDI-model and explore how symbolic model checking techniques can be applied to model checking BDI-agents. In order to make our BDI-model more flexible and practically realistic, we generalize it so that agents can have multiple sources of beliefs, goals and intentions.