Arrow Research search

Author name cluster

Henri Prade

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

102 papers
2 author rows

Possible papers (102)

IJCAI Conference 2025 Conference Paper

40 Years of Research in Possibilistic Logic – a Survey

  • Didier Dubois
  • Henri Prade

Possibilistic logic is forty years old. It is a logic that handles classical logic formulas associated with weights taking values in a linearly ordered set or, more generally, in a lattice. Over the decades, possibilistic logic has undergone numerous developments at both theoretical and applied levels. The ambition of this article is to review all these developments while exposing the main ideas behind them.
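The core inference rule of the logic surveyed here can be shown in a short sketch (an illustrative encoding of my own, not code from the paper): possibilistic resolution combines two weighted clauses and propagates the weaker of the two certainty levels.

```python
# Minimal sketch: possibilistic resolution on weighted clauses.
# A clause is a frozenset of literals; a literal is (atom, polarity).
# Rule: from (C1, a) and (C2, b) clashing on one atom, derive (resolvent, min(a, b)).

def resolve(c1, w1, c2, w2):
    """Return (resolvent, weight) for the first clashing literal, or None."""
    for atom, pol in c1:
        if (atom, not pol) in c2:
            resolvent = (c1 - {(atom, pol)}) | (c2 - {(atom, not pol)})
            return resolvent, min(w1, w2)
    return None

# From (¬p ∨ q, 0.8) and (p, 0.6) we derive (q, 0.6).
c1 = frozenset({("p", False), ("q", True)})
c2 = frozenset({("p", True)})
print(resolve(c1, 0.8, c2, 0.6))
```

The weight of the conclusion is capped by the least certain premise, which is the characteristic behaviour of necessity-based weights.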

ECAI Conference 2024 Conference Paper

A Novel View of Analogical Proportion Between Formulas

  • Andreas Herzig
  • Emiliano Lorini
  • Henri Prade

Analogical proportions are statements of the form “α is to β as γ is to δ”, noted α : β :: γ : δ, and can be understood as “α differs from β as γ differs from δ” and conversely “β differs from α as δ differs from γ”. In this paper, α, β, γ, δ are supposed to be propositional logic formulas, which are appropriate for representing concepts. There exists one approach, developed over the last 15 years, where “α differs from β” is understood in terms of the negation of the material implication α → β. The paper investigates another view where “α differs from β” is interpreted in terms of transformations where some variables become false, some variables become true, and some variables become irrelevant. Both approaches satisfy the three basic postulates of analogical proportions (reflexivity, symmetry, and stability under central permutation), as well as other interesting properties such as transitivity and unicity of the δ such that α : β :: γ : δ. However, the two approaches depart from each other since they do not validate the same analogical proportions. In particular, when p, q, r are atoms, the proportion p : (p∧r) :: q : (q∧r) holds in the new approach, while it fails to do so in the other. The new approach also exhibits good behaviour with respect to integrity constraints. It is advocated that this makes it appropriate for handling analogy between concepts, while the other approach has proved fruitful for Boolean feature-based representations. The paper provides a thorough analysis of the differences between the two approaches.

ECAI Conference 2024 Conference Paper

Analogical Classifier as a Surrogate for Explanations

  • Suryani Lim
  • Henri Prade
  • Gilles Richard

This paper offers a unified, model-agnostic, analogy-based framework for extracting relevant attributes and building contrastive explanations for a given classification. Analogical reasoning relies on the idea of comparing items by pairs to discern their similarities and differences, based on an observable sample S of data. First, adhering to this principle, we propose a pair-based approach to extract relevant attributes via an analogical relevance index (ARI) from Boolean or nominal datasets. We prove properties establishing the behavior of ARI with respect to available samples S. When comparing ARI against established methods such as χ2, mutual information, Shapley, or Sobol indices, empirical evidence highlights the appropriateness of ARI for synthetic datasets, where the classes are generated using specific Boolean functions. ARI also demonstrates robust performance across a diverse range of functions commonly encountered in real datasets. However, some synthetic datasets created from certain functions pose challenges for all indices. Second, we use analogical classification as a surrogate for explanation provision. Analogical classifiers are rooted in analogical proportions, involving the comparison of two pairs of items (a, b) and (c, d) through statements like “a differs from b as c differs from d”, denoted as a : b :: c : d. To explain the classification of an item d, we search for an item c close to d but in a different class, forming a pair (c, d). Then, the analogical classifier extracts pairs (a, b) satisfying a : b :: c : d, leveraging the relevant features identified earlier. Finally, we explain why d belongs to a class distinct from that of c by computing the frequency of pairs (a, b), mirroring the difference between c and d, that lead to the same class change. Standard local methods typically capitalize on examples within the item’s neighborhood for explanation. In contrast, our approach operates beyond the neighborhood, broadening the scope of explanatory power. Experiments on categorical synthetic and real datasets indicate that the proposed framework provides plausible and essentially human-understandable explanations.

IJCAI Conference 2021 Conference Paper

Analogical Proportions: Why They Are Useful in AI

  • Henri Prade
  • Gilles Richard

This paper presents a survey of research on analogical reasoning whose building blocks are analogical proportions, which are statements of the form “a is to b as c is to d”. They have been developed over the last twenty years within an Artificial Intelligence perspective. After discussing their formal modeling and the associated inference mechanism, the paper reports the main results obtained in various AI domains, ranging from computational linguistics to classification, including image processing, IQ tests, case-based reasoning, preference learning, and formal concept analysis. The last section discusses some new theoretical concerns and the potential of analogical proportions in other areas such as argumentation, transfer learning, and XAI.

IJCAI Conference 2020 Conference Paper

Analogy Between Concepts (Extended Abstract)

  • Nelly Barbot
  • Laurent Miclet
  • Henri Prade

Analogical proportions are statements of the form “x is to y as z is to t”, where x, y, z, t are items that may or may not be of the same nature. In this paper, we more particularly consider “relational proportions” of the form “object A has the same relationship with attribute a as object B with attribute b”. We provide a formal definition for relational proportions, and investigate how they can be extracted from a formal context, in the setting of formal concept analysis.

KR Conference 2020 Conference Paper

Jokes and Belief Revision

  • Florence Dupin de Saint-Cyr
  • Henri Prade

The paper deals with a topic little studied in artificial intelligence: the understanding of humor. In this preliminary study, we try to identify the basic mechanism at work in quips and narrative jokes. It seems that in many cases a belief revision process is operating, leading to an unexpected conclusion through the punchline of the jest. We propose a formal modeling of jokes based on belief revision. Namely, the punchline, which triggers a revision, is both surprising and perfectly explains what was reported in the beginning of the joke. This also suggests a way of ranking jokes in terms of surprise and strength of explanation, using possibilistic logic.

JELIA Conference 2019 Invited Paper

Possibilistic Logic: From Certainty-Qualified Statements to Two-Tiered Logics - A Prospective Survey

  • Didier Dubois
  • Henri Prade

Possibilistic logic (PL) is more than thirty years old. The paper proposes a survey of its main developments and applications in artificial intelligence, together with a short presentation of works in progress. PL amounts to a classical logic handling of certainty-qualified statements. Certainty is estimated in the setting of possibility theory as a lower bound of a necessity set-function. An elementary possibilistic formula is a pair made of a classical logic formula and a certainty level belonging to a bounded scale. Basic PL handles only conjunctions of such formulas, and PL bases can be viewed as classical logic bases layered in terms of certainty. Semantics is in terms of epistemic states represented by fuzzy sets of interpretations. A PL base is associated with an inconsistency level above which formulas are safe from inconsistency. Applications include reasoning with default rules, belief revision, Bayesian possibilistic networks, information fusion, and preference modeling (in this latter case, certainty is turned into priority). Different extensions of basic PL are briefly reviewed, where levels take values in lattices, are replaced by vectors of levels, or are handled in a purely symbolic manner (without being instantiated). This latter extension may be of interest for explanation purposes. A paraconsistent treatment of inconsistency is also discussed. Still another extension allows for associating possibilistic formulas with sets of agents or sources that support them. In generalized possibilistic logic (GPL), negation and disjunction, as well as conjunction, can be applied to possibilistic formulas. It may be viewed as a fragment of a modal logic (such as KD45) where modalities cannot be nested. GPL can be further extended to a logic involving both objective and non-nested multimodal formulas. Applications of GPL to the modeling of ignorance, to the representation of answer set programs, to reasoning about other agents’ beliefs, and to a logic of argumentation are outlined. Generally speaking, the interest and the strength of PL rely on a sound alliance between classical logic and possibility theory, which offers a rich representation setting allowing an accurate modeling of partial ignorance. The paper focuses more on ideas than on technicalities and provides references for details. (Invited talk presented by the second author.)

IJCAI Conference 2018 Conference Paper

Behavior of Analogical Inference w.r.t. Boolean Functions

  • Miguel Couceiro
  • Nicolas Hug
  • Henri Prade
  • Gilles Richard

It has been observed that a particular form of analogical inference, based on analogical proportions, yields competitive results in classification tasks. Using the algebraic normal form of Boolean functions, it has been shown that analogical prediction is always exact iff the labeling function is affine. We point out that affine functions are also meaningful when using another view of analogy. We address the accuracy of analogical inference for arbitrary Boolean functions and show that if a function is ε-close to an affine function, then the probability of making a wrong prediction is upper bounded by 4ε. This result is confirmed by an empirical study showing that the upper bound is tight. It highlights the specificity of analogical inference, also characterized in terms of the Hamming distance.
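The component-wise inference scheme this line of work studies can be sketched in a few lines (my own illustrative encoding, not the paper's code): a Boolean analogical proportion a : b :: c : d holds when a − b = c − d on every component, and prediction solves the same equation on the labels.

```python
# Minimal sketch of analogical inference on Boolean tuples.

def ap(a, b, c, d):
    """a : b :: c : d, checked component-wise as a_i - b_i == c_i - d_i."""
    return all(x - y == z - t for x, y, z, t in zip(a, b, c, d))

def solve(x, y, z):
    """Solve x : y :: z : ? in {0, 1}; None when the equation has no solution."""
    t = z - x + y
    return t if t in (0, 1) else None

def predict(train, f, d):
    """Majority vote over label-equation solutions of triples in proportion with d."""
    votes = [0, 0]
    for a in train:
        for b in train:
            for c in train:
                if ap(a, b, c, d):
                    s = solve(f(a), f(b), f(c))
                    if s is not None:
                        votes[s] += 1
    return max((0, 1), key=lambda k: votes[k])

f = lambda x: x[0]                 # an affine labeling function (a projection)
train = [(0, 0), (1, 0), (0, 1)]
print(predict(train, f, (1, 1)))   # -> 1, which equals f((1, 1))
```

For an affine labeling function such as this one, every solvable label equation yields the true label, which is the exactness property the abstract refers to.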

IJCAI Conference 2017 Conference Paper

Analogy-preserving functions: A way to extend Boolean samples

  • Miguel Couceiro
  • Nicolas Hug
  • Henri Prade
  • Gilles Richard

Training set extension is an important issue in machine learning. Indeed, when the examples at hand are in limited quantity, the performance of standard classifiers may significantly decrease, and it can be helpful to build additional examples. In this paper, we consider the use of analogical reasoning, and more particularly of analogical proportions, for extending training sets. Here the ground truth labels are considered to be given by a (partially known) function. We examine the conditions that are required for such functions to ensure an error-free extension in a Boolean setting. To this end, we introduce the notion of Analogy Preserving (AP) functions, and we prove that their class is the class of affine Boolean functions. This noteworthy theoretical result is complemented with an empirical investigation of approximate AP functions, which suggests that they remain suitable for training set extension.

AIJ Journal 2017 Journal Article

Generalized possibilistic logic: Foundations and applications to qualitative reasoning about uncertainty

  • Didier Dubois
  • Henri Prade
  • Steven Schockaert

This paper introduces generalized possibilistic logic (GPL), a logic for epistemic reasoning based on possibility theory. Formulas in GPL correspond to propositional combinations of assertions such as “it is certain to degree λ that the propositional formula α is true”. As its name suggests, the logic generalizes possibilistic logic (PL), which at the syntactic level only allows conjunctions of the aforementioned type of assertions. At the semantic level, PL can only encode sets of epistemic states encompassed by a single least informed one, whereas GPL can encode any set of epistemic states. This feature makes GPL particularly suitable for reasoning about what an agent knows about the beliefs of another agent, e.g., allowing the former to draw conclusions about what the other agent does not know. We introduce an axiomatization for GPL and show its soundness and completeness w.r.t. possibilistic semantics. Subsequently, we highlight the usefulness of GPL as a powerful unifying framework for various knowledge representation formalisms. Among others, we show how comparative uncertainty and ignorance can be modelled in GPL. We also exhibit a close connection between GPL and various existing formalisms, including possibilistic logic with partially ordered formulas, a logic of conditional assertions in the style of Kraus, Lehmann and Magidor, answer set programming and a fragment of the logic of minimal belief and negation as failure. Finally, we analyse the computational complexity of reasoning in GPL, identifying decision problems at the first, second, third and fourth level of the polynomial hierarchy.
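The possibilistic semantics underlying these certainty assertions can be illustrated with a small sketch (the world encoding and names are my own): an epistemic state is a possibility distribution over worlds, the possibility of a proposition is the maximal possibility of its models, and its necessity (certainty) is the dual.

```python
# Minimal sketch: possibility and necessity degrees of a proposition,
# given a possibility distribution pi over worlds.

def possibility(pi, models):
    """Π(A) = max of pi(w) over worlds w satisfying A."""
    return max((pi[w] for w in pi if w in models), default=0.0)

def necessity(pi, models):
    """N(A) = min of 1 - pi(w) over worlds w falsifying A (dual of Π)."""
    return min((1.0 - pi[w] for w in pi if w not in models), default=1.0)

# Epistemic state over four worlds (valuations of atoms p, q).
pi = {"pq": 1.0, "p~q": 0.7, "~pq": 0.3, "~p~q": 0.0}
models_p = {"pq", "p~q"}           # worlds where p holds
print(possibility(pi, models_p))   # 1.0
print(necessity(pi, models_p))     # min(1 - 0.3, 1 - 0.0) = 0.7
```

An assertion “it is certain to degree λ that α” then holds in this state exactly when `necessity(pi, models_of_alpha) >= λ`.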

ECAI Conference 2016 Conference Paper

Analogical Classifiers: A Theoretical Perspective

  • Nicolas Hug
  • Henri Prade
  • Gilles Richard
  • Mathieu Serrurier

In recent works, analogy-based classifiers have proved quite successful. They exhibit good accuracy rates when compared with standard classification methods. Nevertheless, a theoretical study of their predictive power has not been done so far. One of the main barriers has been the lack of a functional definition: analogical learners have only algorithmic definitions. The aim of our paper is to complement the empirical studies with a theoretical perspective. Using a simplified framework, we first provide a concise functional definition of the output of an analogical learner. Two versions of the definition are considered, a strict and a relaxed one. As far as we know, this is the first definition of this kind for analogical learners. Then, taking inspiration from results in k-NN studies, we examine some analytic properties such as convergence and VC-dimension, which are among the basic markers in terms of machine learning expressiveness. We then look at what could be expected in terms of theoretical accuracy from such a learner, in a Boolean setting. We examine learning curves for artificial domains, providing experimental results that illustrate our formulas, and empirically validate our functional definition of analogical classifiers.

ECAI Conference 2016 Conference Paper

Not Being at Odds with a Class: A New Way of Exploiting Neighbors for Classification

  • Myriam Bounhas
  • Henri Prade
  • Gilles Richard

Classification can be viewed as a matter of associating a new item with the class where it is the least at odds w.r.t. the other elements. A recently proposed oddness index applied to pairs or triples (rather than larger subsets of elements in a class), when summed up over all such subsets, provides an accurate estimate of a global oddness of an item w.r.t. a class. Rather than considering all pairs in a class, one can only deal with pairs containing one of the nearest neighbors of the item in the target class. Taking a step further, we choose the second element in the pair as another nearest neighbor in the class. The oddness w.r.t. a class computed on the basis of pairs made of two nearest neighbors leads to low complexity classifiers, still competitive in terms of accuracy w.r.t. classical approaches.

ECAI Conference 2016 Conference Paper

Preference Modeling with Possibilistic Networks and Symbolic Weights: A Theoretical Study

  • Nahla Ben Amor
  • Didier Dubois
  • Héla Gouider
  • Henri Prade

The use of possibilistic networks for representing conditional preference statements on discrete variables has been proposed only recently. The approach uses non-instantiated possibility weights to define conditional preference tables. Moreover, additional information about the relative strengths of these symbolic weights can be taken into account. The fact that at best we have some information about the relative values of these weights acknowledges the qualitative nature of preference specification. These conditional preference tables give birth to vectors of symbolic weights that reflect the preferences that are satisfied and those that are violated in a considered situation. The comparison of such vectors may rely on different orderings: the ones induced by the product-based or the minimum-based chain rule underlying the possibilistic network, the discrimin or leximin refinements of the minimum-based ordering, as well as the Pareto ordering and the symmetric Pareto ordering that refines it. A thorough study of the relations between these orderings in the presence of vector components that are symbolic rather than numerical is presented. In particular, we establish that the product-based ordering and the symmetric Pareto ordering coincide in the presence of constraints comparing pairs of symbolic weights. This ordering agrees in the Boolean case with the inclusion between the sets of preference statements that are violated. The symmetric Pareto ordering may itself be refined by the leximin ordering. The paper highlights the merits of product-based possibilistic networks for representing preferences and provides a comparative discussion with CP-nets and OCF-networks.
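Two of the orderings compared in this study can be sketched in the numeric case (a background sketch of my own; the paper's vectors carry symbolic weights, which is precisely what makes its analysis harder):

```python
# Background sketch: Pareto dominance and its leximin refinement on weight vectors.

def pareto_leq(u, v):
    """u is Pareto-dominated-or-equal by v: u is at most v on every component."""
    return all(a <= b for a, b in zip(u, v))

def leximin_lt(u, v):
    """u strictly below v in the leximin ordering: compare sorted vectors."""
    return sorted(u) < sorted(v)

u, v = (0.2, 0.9), (0.9, 0.3)
print(pareto_leq(u, v))   # False: u and v are incomparable under Pareto
print(leximin_lt(u, v))   # True: sorted [0.2, 0.9] < sorted [0.3, 0.9]
```

The example shows why refinements matter: Pareto leaves many vectors incomparable, while leximin ranks them by their worst components first.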

EAAI Journal 2015 Journal Article

Engineering multiuser museum interactives for shared cultural experiences

  • Roberto Confalonieri
  • Matthew Yee-King
  • Katina Hazelden
  • Mark d'Inverno
  • Dave de Jonge
  • Nardine Osman
  • Carles Sierra
  • Leila Amgoud

Multiuser museum interactives are computer systems installed in museums or galleries which allow several visitors to interact together with digital representations of artefacts and information from the museum's collection. In this paper, we describe WeCurate, a socio-technical system that supports co-browsing across multiple devices and enables groups of users to collaboratively curate a collection of images through negotiation, collective decision making and voting. The engineering of such a system is challenging since it requires addressing several problems: distributed workflow control, collective decision making and multiuser synchronous interactions. The system uses a peer-to-peer Electronic Institution (EI) to manage and execute a distributed curation workflow and models community interactions as scenes, where users engage in different social activities. Social interactions are enacted by intelligent agents that interface the users participating in the curation workflow with the EI infrastructure. The multiagent system supports collective decision making, representing the actions of the users within the EI, where the agents advocate and support the desires of their users, e.g. aggregating opinions for deciding which images are interesting enough to be discussed, and proposing interactions and resolutions between disagreeing group members. Throughout the paper, we describe the enabling technologies of WeCurate: the peer-to-peer EI infrastructure, the agent collective decision-making capabilities and the multi-modal interface. We present a system evaluation based on data collected from cultural exhibitions in which WeCurate was used as a supporting multiuser interactive.

ICML Conference 2015 Conference Paper

Entropy evaluation based on confidence intervals of frequency estimates: Application to the learning of decision trees

  • Mathieu Serrurier
  • Henri Prade

Entropy gain is widely used for learning decision trees. However, as we go deeper down the tree, the examples become rarer and the faithfulness of entropy decreases. Thus, misleading choices and over-fitting may occur, and the tree has to be adjusted by using an early-stop criterion or post-pruning algorithms. However, these methods still depend on the choices previously made, which may be unsatisfactory. We propose a new cumulative entropy function based on confidence intervals on frequency estimates that jointly considers the entropy of the probability distribution and the uncertainty around the estimation of its parameters. This function takes advantage of the ability of a possibility distribution to upper bound a family of probabilities previously estimated from a limited set of examples, and of the link between the possibilistic specificity order and entropy. The proposed measure has several advantages over the classical one. It performs significant choices of split and provides a statistically relevant stopping criterion that allows the learning of trees whose size is well-suited w.r.t. the available data. On top of that, it also provides a reasonable estimator of the performance of a decision tree. Finally, we show that it can be used for designing a simple and efficient online learning algorithm.
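The classical measure this paper refines can be stated in a few lines (a background sketch of the standard notion, not the paper's cumulative entropy): entropy gain of a split from class counts, whose frequency estimates become unreliable exactly when the counts are small.

```python
# Background sketch: classical entropy gain of a split, from class counts.
import math

def entropy(counts):
    """Shannon entropy (in bits) of a class-count vector."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def gain(parent, children):
    """Entropy of the parent node minus the weighted entropy of its children."""
    n = sum(parent)
    return entropy(parent) - sum(sum(ch) / n * entropy(ch) for ch in children)

# A pure split of 8 examples (4 per class) has the maximal gain of 1 bit.
print(gain([4, 4], [[4, 0], [0, 4]]))  # 1.0
```

With, say, two examples per leaf, the same formula still returns a crisp number, which is the over-confidence the paper's confidence-interval-based measure is designed to correct.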

IJCAI Conference 2015 Conference Paper

The Cube of Opposition: A Structure Underlying Many Knowledge Representation Formalisms

  • Didier Dubois
  • Henri Prade
  • Agnès Rico

The square of opposition is a structure involving two involutive negations and relating quantified statements, invented in Aristotle's time. Rediscovered in the second half of the XXth century, and advocated as being of interest for understanding conceptual structures and solving problems in paraconsistent logics, the square of opposition has recently been completed into a cube, which corresponds to the introduction of a third negation. Such a cube can be encountered in very different knowledge representation formalisms, such as modal logic, possibility theory in its all-or-nothing version, formal concept analysis, rough set theory and abstract argumentation. After restating these results in a unified perspective, the paper proposes a graded extension of the cube and shows that several qualitative as well as quantitative formalisms, such as Sugeno integrals used in multiple criteria aggregation and qualitative decision theory, or belief functions and Choquet integrals, are amenable to transformations that form graded cubes of opposition. This discovery leads to a new perspective on many knowledge representation formalisms, laying bare their underlying common features. The cube of opposition exhibits fruitful parallelisms between different formalisms, which highlights components that are present in one formalism and currently absent from another.

ECAI Conference 2014 Conference Paper

Analogical classification: A new way to deal with examples

  • Myriam Bounhas
  • Henri Prade
  • Gilles Richard

Introduced a few years ago, analogy-based classification methods are a noticeable addition to the set of lazy learning techniques. They provide impressive results (in terms of accuracy) on many classical datasets. They look for all triples of examples in the training set that are in analogical proportion with the item to be classified on a maximal number of attributes and for which the corresponding analogical proportion equation on the class has a solution. In this paper, we demonstrate a new approach that, when classifying a new item, focuses on a small part of the available triples. To restrict the scope of the search, we first look for examples that are as similar as possible to the new item to be classified. We then only consider the pairs of examples presenting the same dissimilarity as between the new item and one of its closest neighbors. Thus we implicitly build triples that are in analogical proportion on all attributes with the new item. The classification is then made on the basis of a majority vote on the pairs leading to a solvable class equation. This new algorithm provides results as good as other analogical classifiers with a lower average complexity.

ECAI Conference 2014 Conference Paper

From analogical proportions in lattices to proportional analogies in formal concepts

  • Laurent Miclet
  • Nelly Barbot
  • Henri Prade

The paper provides an attempt at bridging formal concept analysis and the modeling of analogical proportions (i.e., statements of the form "a is to b as c is to d"). A suitable definition for analogical proportions in non-distributive lattices is proposed and then applied to concept lattices. This enables us to compute what we call proportional analogies that establish analogies on a proportional basis between pairs (a, b) and (c, d) when a and c belong to a domain and b and d to another domain (as in "Moby Dick is to Herman Melville as Alice in Wonderland is to Lewis Carroll").

JELIA Conference 2014 Conference Paper

Logical Foundations of Possibilistic Keys

  • Henning Köhler
  • Uwe Leck
  • Sebastian Link
  • Henri Prade

Possibility theory is applied to introduce and reason about the fundamental notion of a key for uncertain data. Uncertainty is modeled qualitatively by assigning to tuples of data a degree of possibility with which they occur in a relation, and assigning to keys a degree of certainty which says to which tuples the key applies. The associated implication problem is characterized axiomatically and algorithmically. It is shown how sets of possibilistic keys can be visualized as possibilistic Armstrong relations, and how they can be discovered from given possibilistic relations. It is also shown how possibilistic keys can be used to clean dirty data by revising the belief in possibility degrees of tuples.

ECAI Conference 2014 Conference Paper

Reasoning about Uncertainty and Explicit Ignorance in Generalized Possibilistic Logic

  • Didier Dubois
  • Henri Prade
  • Steven Schockaert

Generalized possibilistic logic (GPL) is a logic for reasoning about the revealed beliefs of another agent. It is a two-tier propositional logic, in which propositional formulas are encapsulated by modal operators that are interpreted in terms of uncertainty measures from possibility theory. Models of a GPL theory represent weighted epistemic states and are encoded as possibility distributions. One of the main features of GPL is that it allows us to explicitly reason about the ignorance of another agent. In this paper, we study two types of approaches for reasoning about ignorance in GPL, based on the idea of minimal specificity and on the notion of guaranteed possibility, respectively. We show how these approaches naturally lead to different flavours of the language of GPL and a number of decision problems, whose complexity ranges from the first to the third level of the polynomial hierarchy.

ECAI Conference 2014 Conference Paper

Some Elements for a Prehistory of Artificial Intelligence in the Last Four Centuries

  • Pierre Marquis
  • Odile Papini
  • Henri Prade

Artificial intelligence (AI) was not born ex nihilo in the mid-fifties of the XXth century. Beyond its immediate roots in cybernetics and in computer science, which started about two decades before, its emergence is the result of a long and slow process in the history of humanity. This process can be articulated around two main questions: the formalization of reasoning and the design of machines having autonomous capabilities in terms of computation and action. The aim of this paper is to gather some insufficiently known elements about the prehistory of AI in the 350 years that precede the official birth of AI, a time period where only a few very well-known names, such as Thomas Bayes and George Boole, are usually mentioned in relation with AI.

AIJ Journal 2013 Journal Article

Interpolative and extrapolative reasoning in propositional theories using qualitative knowledge about conceptual spaces

  • Steven Schockaert
  • Henri Prade

Many logical theories are incomplete, in the sense that non-trivial conclusions about particular situations cannot be derived from them using classical deduction. In this paper, we show how the ideas of interpolation and extrapolation, which are of crucial importance in many numerical domains, can be applied in symbolic settings to alleviate this issue in the case of propositional categorization rules. Our method is based on (mainly) qualitative descriptions of how different properties are conceptually related, where we identify conceptual relations between properties with spatial relations between regions in Gärdenfors conceptual spaces. The approach is centred around the view that categorization rules can often be seen as approximations of linear (or at least monotonic) mappings between conceptual spaces. We use this assumption to justify that whenever the antecedents of a number of rules stand in a relationship that is invariant under linear (or monotonic) transformations, their consequents should also stand in that relationship. A form of interpolative and extrapolative reasoning can then be obtained by applying this idea to the relations of betweenness and parallelism respectively. After discussing these ideas at the semantic level, we introduce a number of inference rules to characterize interpolative and extrapolative reasoning at the syntactic level, and show their soundness and completeness w.r.t. the proposed semantics. Finally, we show that the considered inference problems are PSPACE-hard in general, while implementations in polynomial time are possible under some relatively mild assumptions.

IJCAI Conference 2013 Conference Paper

Interpolative Reasoning with Default Rules

  • Steven Schockaert
  • Henri Prade

Default reasoning and interpolation are two important forms of commonsense rule-based reasoning. The former allows us to draw conclusions from incompletely specified states, by making assumptions on normality, whereas the latter allows us to draw conclusions from states that are not explicitly covered by any of the available rules. Although both approaches have received considerable attention in the literature, it is at present not well understood how they can be combined to draw reasonable conclusions from incompletely specified states and incomplete rule bases. In this paper, we introduce an inference system for interpolating default rules, based on a geometric semantics in which normality is related to spatial density and interpolation is related to geometric betweenness. We view default rules and information on the betweenness of natural categories as particular types of constraints on qualitative representations of Gärdenfors conceptual spaces. We propose an axiomatization, extending the well-known System P, and show its soundness and completeness w.r.t. the proposed semantics. Subsequently, we explore how our extension of preferential reasoning can be further refined by adapting two classical approaches for handling the irrelevance problem in default reasoning: rational closure and conditional entailment.

ECAI Conference 2012 Conference Paper

Decision-making with Sugeno integrals: DMU vs. MCDM

  • Miguel Couceiro
  • Didier Dubois
  • Henri Prade
  • Tamás Waldhauser

This paper clarifies the connection between multiple criteria decision-making and decision under uncertainty in a qualitative setting relying on a finite value scale. While their mathematical formulations are very similar, the underlying assumptions differ and the latter problem turns out to be a special case of the former. Sugeno integrals are very general aggregation operations that can represent preference relations between uncertain acts or between multifactorial alternatives where attributes share the same totally ordered domain. This paper proposes a generalized form of the Sugeno integral that can cope with attributes which have distinct domains via the use of qualitative utility functions. In the case of decision under uncertainty, this model corresponds to state-dependent preferences on act consequences. Axiomatizations of the corresponding preference functionals are proposed in the cases where uncertainty is represented by possibility measures, by necessity measures, and by general monotonic set-functions, respectively. This is achieved by weakening previously proposed axiom systems for Sugeno integrals.

KR Conference 2012 Conference Paper

Homogeneous logical proportions: Their unicity and their role in similarity-based prediction

  • Henri Prade
  • Gilles Richard

Given a 4-tuple of Boolean variables (a, b, c, d), logical proportions are modeled by a pair of equivalences relating similarity indicators (a∧b and ¬a∧¬b), or dissimilarity indicators (a∧¬b and ¬a∧b) pertaining to the pair (a, b), to the ones associated with the pair (c, d). Logical proportions are homogeneous when they are based on equivalences between indicators of the same kind. There are only 4 such homogeneous proportions, which respectively express that i) “a differs from b as c differs from d” (and “b differs from a as d differs from c”), ii) “a differs from b as d differs from c” (and “b differs from a as c differs from d”), iii) “what a and b have in common c and d have it also”, iv) “what a and b have in common neither c nor d have it”. We prove that each of these proportions is the unique Boolean formula (up to equivalence) that satisfies groups of remarkable properties including a stability property w.r.t. a specific permutation of the terms of the proportion. The first one (i) is shown to be the only one to satisfy the standard postulates of an analogical proportion. The paper also studies how two analogical proportions can be combined into a new one. We then examine how homogeneous proportions can be used for diverse prediction tasks. We particularly focus on the completion of analogical-like series, and on missing value abduction problems. Finally, the paper compares our approach with other existing works on qualitative prediction based on ideas of betweenness, or of matrix abduction.
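These proportions are easy to check by brute force over the 16 Boolean 4-tuples; each homogeneous proportion is known to hold for exactly 6 of them. A quick sketch for two of them, analogy (case i) and paralogy (case iii); the function names are mine:

```python
from itertools import product

def analogy(a, b, c, d):
    # "a differs from b as c differs from d" (and conversely)
    return (a and not b) == (c and not d) and (not a and b) == (not c and d)

def paralogy(a, b, c, d):
    # "what a and b have in common, c and d have it also"
    return (a and b) == (c and d) and (not a and not b) == (not c and not d)

models_analogy = [t for t in product([False, True], repeat=4) if analogy(*t)]
models_paralogy = [t for t in product([False, True], repeat=4) if paralogy(*t)]
```

Enumerating the tuples also confirms the stability of analogy under central permutation (swapping b and c), one of the standard postulates mentioned in the abstract.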

KR Conference 2012 Conference Paper

Stable models in generalized possibilistic logic

  • Didier Dubois
  • Henri Prade
  • Steven Schockaert

An important aspect of possibilistic logic is that models correspond to epistemic states, rather than to propositional interpretations, which forms a natural basis for epistemic reasoning. However, possibilistic logic only takes sets of formulas of the form (α, λ) into account, which we could interpret as conjunctions of assertions of the form N (α) ≥ λ. In some applications, on the other hand, we may want to link such assertions using different propositional connectives. In logic programming, for instance, a (negation-free) rule such as α → β intuitively means that whenever α is known to be true, we should accept β to be true as well. This could be expressed using necessity measures and material implication as N (α) ≥ 1 ⇒ N (β) ≥ 1. This implication, however, cannot be expressed in possibilistic logic, an observation which stands in stark contrast to the expressivity of modal logics for epistemic reasoning. In (Banerjee and Dubois 2009), a so-called Meta-Epistemic Logic (MEL) was introduced as a first step to bridge this gap, in the form of a simple modal logic with a semantics in terms of Boolean possibility distributions (i.e., possibility distributions π such that π(ω) ∈ {0, 1} for every ω ∈ Ω). Essentially, MEL is a fragment of the modal logic KD, in which neither the nesting of modalities nor the occurrence of nonmodal propositional formulas is allowed. Recently, generalized possibilistic logic (GPL) was introduced as a graded version of MEL (Dubois, Prade, and Schockaert 2011; Dubois and Prade 2011), developing an original proposal of (Dubois and Prade 2007). Possibilistic logic is a well-known logic for reasoning under uncertainty, which is based on the idea that the epistemic state of an agent can be modeled by assigning to each possible world a degree of possibility, taken from a totally ordered, but essentially qualitative scale. 
Recently, a generalization has been proposed that extends possibilistic logic to a meta-epistemic logic, endowing it with the capability of reasoning about epistemic states, rather than merely constraining them. In this paper, we further develop this generalized possibilistic logic (GPL). We introduce an axiomatization showing that GPL is a fragment of a graded version of the modal logic KD, and we prove soundness and completeness w.r.t. a semantics in terms of possibility distributions. Next, we reveal a close link between the well-known stable model semantics for logic programming and the notion of minimally specific models in GPL. More generally, we analyze the relationship between the equilibrium logic of Pearce and GPL, showing that GPL can essentially be seen as a generalization of equilibrium logic, although its notion of minimal specificity is slightly more demanding than the notion of minimality underlying equilibrium logic.
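Assertions of the form N(α) ≥ λ can be checked directly against a possibility distribution, since N(α) = 1 − Π(¬α). A minimal numeric sketch of this standard duality (the dict encoding of worlds is an assumption of mine; GPL itself works with a qualitative scale):

```python
def poss(pi, models_of_alpha):
    """Possibility of a proposition: the highest possibility degree
    among its models (worlds are keys of the distribution pi)."""
    return max((pi[w] for w in models_of_alpha), default=0.0)

def nec(pi, models_of_alpha):
    """Dual necessity measure: N(alpha) = 1 - Pi(not alpha)."""
    counter = [w for w in pi if w not in set(models_of_alpha)]
    return 1.0 - poss(pi, counter)
```

With Boolean distributions (degrees in {0, 1}) these two measures collapse to the MEL reading discussed in the abstract.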

ECAI Conference 2012 Conference Paper

When intelligence is just a matter of copying

  • William Fernando Correa
  • Henri Prade
  • Gilles Richard

The paper investigates a general approach that allows us to solve IQ tests based on Raven's progressive matrices automatically. The 3 × 3 Raven matrices exhibit 8 geometric pictures displayed in 8 cells of the matrix, which have to be logically completed with a 9th picture to be put in the remaining empty matrix cell. In these tests, a set of candidate pictures for the solution is also given. The suggested approach is based on a logical view of analogical proportions (i.e., statements of the form "A is to B as C is to D"). The reading of Raven matrices in terms of such proportions can be applied to a feature-based description of the pictures, but also, in a number of cases, to a very low level representation, i.e., the pixel level. It appears that the analogical proportion reading just amounts here to a recopy of patterns of feature values that already appear in the data (after checking that there are no conflicting patterns). Implementing this principle, the proposed algorithm computes the 9th picture, without the help of any set of candidate solutions, but only on the basis of the 8 known cells of the Raven matrices. A comparison with other approaches is provided, and we emphasize the generality of the approach which is able to provide a simple and uniform mechanism applicable in many situations.

AIJ Journal 2011 Journal Article

Preferences in AI: An overview

  • Carmel Domshlak
  • Eyke Hüllermeier
  • Souhila Kaci
  • Henri Prade

This editorial of the special issue “Representing, Processing, and Learning Preferences: Theoretical and Practical Challenges” surveys past and ongoing research on preferences in AI, including references and pointers to the literature. It covers approaches to representation, reasoning and learning of preferences. Methods in AI are contrasted with those in related areas, such as operations research and databases. Finally, we also give a brief introduction to the contents of the special issue.

AIJ Journal 2011 Journal Article

Solving conflicts in information merging by a flexible interpretation of atomic propositions

  • Steven Schockaert
  • Henri Prade

Although many techniques for merging conflicting propositional knowledge bases have already been proposed, most existing work is based on the idea that inconsistency results from the presence of incorrect pieces of information, which should be identified and removed. In contrast, we take the view in this paper that conflicts are often caused by statements that are inaccurate rather than completely false, suggesting to restore consistency by interpreting certain statements in a flexible way, rather than ignoring them completely. In accordance with this view, we propose a novel approach to merging which exploits extra-logical background information about the semantic relatedness of atomic propositions. Several merging operators are presented, which are based on different formalizations of this background knowledge, ranging from purely qualitative approaches, related to possibilistic logic, to quantitative approaches with a probabilistic flavor. Both syntactic and semantic characterizations are provided for each merging operator, and the computational complexity is analyzed.

AAAI Conference 2010 Conference Paper

An Inconsistency-Tolerant Approach to Information Merging Based on Proposition Relaxation

  • Steven Schockaert
  • Henri Prade

Inconsistencies between different information sources may arise because of statements that are inaccurate, albeit not completely false. In such scenarios, the most natural way to restore consistency is often to interpret assertions in a more flexible way, i.e., to enlarge (or relax) their meaning. As this process inherently requires extra-logical information about the meaning of atoms, extensions of classical merging operators are needed. In this paper, we introduce syntactic merging operators, based on possibilistic logic, which employ background knowledge about the similarity of atomic propositions to appropriately relax propositional statements.

KR Conference 2010 Conference Paper

Reasoning with logical proportions

  • Henri Prade
  • Gilles Richard

By logical proportion, we mean a statement that expresses a semantical equivalence between two pairs of propositions. In these pairs, each element is compared to the other in terms of similarities and/or dissimilarities. An example of such a proportion is the well known analogical proportion: a is to b as c is to d. Analogical proportions have been recently characterized in logical terms, but there are many other proportions that are worthy of interest. Some of them can be related to the analogical pattern, others are related to semantical equivalence between conditional objects and express statements such as a resembles b and differs from b in the same way as c with respect to d. We show that there are 5 direct proportions, including the analogical one and 4 others having a conditional object flavor, where the change (if any) from a to b goes in the same direction as the change from c to d (if any), together with 5 reverse proportions obtained by switching c and d. Moreover, there exists only one auto-reverse proportion called paralogy and stating that what a and b have in common, c and d have it as well. It is then established that there is no proportion other than these (with the exception of 4 degenerate ones) that satisfies a natural “full identity” requirement. The paper proposes a structured and unified view of these logical proportions and discusses their characteristic properties. It extends previous works where only proportions related to analogy were considered. It also explores the use of these logical proportions in transduction-like inference, where new items are classified on the basis of already classified items without trying to induce a generic model, considering similarities and differences between items only. Taking advantage of different proportions, a transduction procedure is proposed.
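The transduction idea rests on a well-known property of Boolean analogy: the equation a : b :: c : x has at most one solution, so prediction can proceed by solving the equation component-wise over feature vectors. A sketch under that reading (the helper names are mine):

```python
def solve_analogy(a, b, c):
    """Solve a : b :: c : x componentwise on Booleans.
    Returns None when the equation has no solution (a != b and a != c)."""
    if a != b and a != c:
        return None
    return c if a == b else b

def extrapolate(xs, ys, zs):
    """Apply the solver position-wise to three Boolean feature tuples;
    the result is the candidate completion of the analogical series."""
    out = [solve_analogy(a, b, c) for a, b, c in zip(xs, ys, zs)]
    return None if None in out else tuple(out)
```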

AIJ Journal 2009 Journal Article

Using arguments for making and explaining decisions

  • Leila Amgoud
  • Henri Prade

Arguments play two different roles in everyday-life decisions, as well as in the discussion of more crucial issues. Namely, they help to select one or several alternatives, or to explain and justify an already adopted choice. This paper proposes the first general and abstract argument-based framework for decision making. This framework follows two main steps. At the first step, arguments for beliefs and arguments for options are built and evaluated using classical acceptability semantics. At the second step, pairs of options are compared using decision principles. Decision principles are based on the accepted arguments supporting the options. Three classes of decision principles are distinguished: unipolar, bipolar or non-polar principles depending on whether i) only arguments pro or only arguments con, or ii) both types, or iii) an aggregation of them into a meta-argument are used. The abstract model is then instantiated by expressing formally the mental states (beliefs and preferences) of a decision maker. In the proposed framework, information is given in the form of a stratified set of beliefs. The bipolar nature of preferences is emphasized by making an explicit distinction between prioritized goals to be pursued, and prioritized rejections that are stumbling blocks to be avoided. A typology that identifies four types of argument is proposed. Indeed, each decision is supported by arguments emphasizing its positive consequences in terms of goals certainly satisfied and rejections certainly avoided. A decision can also be attacked by arguments emphasizing its negative consequences in terms of certainly missed goals, or rejections certainly led to by that decision. Finally, this paper articulates the optimistic and pessimistic decision criteria defined in qualitative decision making under uncertainty, in terms of an argumentation process. 
Similarly, different decision principles identified in multiple criteria decision making are restated in our argumentation-based framework.

ECAI Conference 2008 Conference Paper

Mastering the Processing of Preferences by Using Symbolic Priorities in Possibilistic Logic

  • Souhila Kaci
  • Henri Prade

The paper proposes a new approach to the handling of preferences expressed in a compact way under the form of conditional statements. These conditional statements are translated into classical logic formulas associated with symbolic levels. Ranking two alternatives then leads to comparing their respective amounts of violation with respect to the set of formulas expressing the preferences. These symbolic violation amounts, which can be computed in a possibilistic logic manner, can be partially ordered lexicographically once put in a vector form. This approach is compared to the ceteris paribus-based CP-net approach, which is the main existing artificial intelligence approach to the compact processing of preferences. It is shown that the partial order obtained with the CP-net approach fully agrees with the one obtained with the proposed approach, but generally includes further strict preferences between alternatives (considered as being not comparable by the symbolic level logic-based approach). These additional strict preferences are in fact debatable, since they are not the reflection of explicit user's preferences but the result of the application of the ceteris paribus principle that implicitly, and quite arbitrarily, favors father node preferences in the graphical structure associated with conditional preferences. By adding constraints between symbolic levels expressing that violating father nodes is less acceptable than violating child nodes, it is shown that it is possible to recover the CP-net-induced partial order. Due to existing results in possibilistic logic with symbolic levels, the proposed approach is computationally tractable. Key words: preference, priority, partial order, CP-net, possibilistic logic.

IJCAI Conference 2007 Conference Paper

  • Romain Gérard
  • Souhila Kaci
  • Henri Prade

The paper presents and discusses a method for rank-ordering alternatives on the basis of constraints induced by generic principles (expressing for instance the relative importance of criteria), or by examples of orderings between particular alternatives, without resorting to the use of an aggregation operation for evaluating the alternatives. The approach, which remains qualitative, is based on the minimal specificity principle of possibility theory in order to complete the constraints. It is compared, on an illustrative example, to an aggregation-based approach using the Choquet integral. The way constraints expressed in the Choquet integral setting translate into constraints in the proposed approach is discussed.

AIJ Journal 2007 Journal Article

Introducing possibilistic logic in ILP for dealing with exceptions

  • Mathieu Serrurier
  • Henri Prade

In this paper we propose a new formalization of the inductive logic programming (ILP) problem for a better handling of exceptions. It is now encoded in first-order possibilistic logic. This allows us to handle exceptions by means of prioritized rules, thus taking lessons from non-monotonic reasoning. Indeed, in classical first-order logic, the exceptions of the rules that constitute a hypothesis accumulate, and classifying an example in two different classes, even if one is the right one, is not correct. The possibilistic formalization provides a sound encoding of non-monotonic reasoning that copes with rules with exceptions and prevents an example from being classified in more than one class. The benefits of our approach with respect to the use of first-order decision lists are pointed out. The possibilistic logic view of the ILP problem leads to an optimization problem at the algorithmic level. An algorithm based on simulated annealing that in one turn computes the set of rules together with their priority levels is proposed. The reported experiments show that the algorithm is competitive with standard ILP approaches on benchmark examples.

ECAI Conference 2006 Conference Paper

A Similarity and Fuzzy Logic-Based Approach to Cerebral Categorisation

  • Julien Erny
  • Josette Pastor
  • Henri Prade

This work proposes a formal modelling of categorisation processes that attempts to simulate the way information is categorised by neural populations in the human brain. The formalism mainly relies on a similarity-based approach to categorisation. It involves weighted rules that use inference and fusion techniques borrowed from fuzzy logic. The approach is illustrated by a simulation of the McGurk effect, where the combination of contradictory auditory and visual stimuli creates an auditory perceptive illusion.

ECAI Conference 2006 Conference Paper

Background Default Knowledge and Causality Ascriptions

  • Jean-François Bonnefon
  • Rui Da Silva Neves
  • Didier Dubois
  • Henri Prade

A model is defined that predicts an agent's ascriptions of causality (and related notions of facilitation and justification) between two events in a chain, based on background knowledge about the normal course of the world. Background knowledge is represented by nonmonotonic consequence relations. This enables the model to handle situations of poor information, where background knowledge is not accurate enough to be represented in, e.g., structural equations. Tentative properties of causality ascriptions are explored, i.e., preference for abnormal factors, transitivity, coherence with logical entailment, and stability with respect to disjunction and conjunction. Empirical data are reported to support the psychological plausibility of our basic definitions.

ECAI Conference 2006 Conference Paper

Compiling Possibilistic Knowledge Bases

  • Salem Benferhat
  • Henri Prade

Possibilistic knowledge bases gather propositional formulas associated with degrees belonging to a linearly ordered scale. These degrees reflect certainty or priority, depending on whether the formulas encode pieces of beliefs or goals to be pursued. Possibilistic logic provides a simple format that turns out to be useful for handling qualitative uncertainty, exceptions or preferences. The main result of the paper provides a way for compiling a possibilistic knowledge base in order to be able to process inference from it in polynomial time. The procedure is based on a symbolic treatment of the degrees under the form of sorted literals and on the idea of forgetting variables. The number of sorted literals that are added corresponds exactly to the number of priority levels existing in the base, and the number of binary clauses added in the compilation is also equal to this number of levels. The resulting extra compilation cost is very low.
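The "forgetting variables" step has a standard propositional reading: forgetting x in φ yields φ[x←⊤] ∨ φ[x←⊥], i.e. existential quantification. A minimal sketch with formulas encoded as Python predicates over assignments (this encoding is mine, not the paper's clause-level compilation procedure):

```python
def forget(formula, var):
    """Forget a propositional variable: exists-quantify it,
    i.e. forget(phi, x) = phi[x := True] or phi[x := False].
    Formulas are predicates over assignment dicts."""
    def quantified(assign):
        hi = dict(assign, **{var: True})
        lo = dict(assign, **{var: False})
        return formula(hi) or formula(lo)
    return quantified
```

For instance, forgetting q in p ∧ q leaves a formula equivalent to p.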

ECAI Conference 2006 Conference Paper

Version Space Learning for Possibilistic Hypotheses

  • Henri Prade
  • Mathieu Serrurier

In this paper, we are interested in learning stratified hypotheses from examples and counter-examples associated with weights that express their prototypical importance. It leads to an extension of the well-known version space learning framework. In order to do that, we emphasize that the treatment of positive and negative examples in version space learning is reminiscent of a bipolar revision process recently studied in the setting of possibilistic information representation. Bipolarity appears when the positive and negative sides of information are specified in a distinct way. Then, we use the possibilistic bipolar representation setting, which distinguishes between what is guaranteed to be possible, and what is simply not impossible, as a basis for extending version space learning to examples associated with possibility degrees. It allows us to define a formal framework for learning layered hypotheses.

TIME Conference 2004 Conference Paper

A Possibility Theory-based Approach for Handling of Uncertain Relations Between Temporal Points

  • Allel HadjAli
  • Didier Dubois
  • Henri Prade

Uncertain relations between temporal points are represented by means of possibility distributions over the three basic relations "smaller than", "equal to", and "greater than". Operations for computing inverse relations, for composing relations, for combining relations coming from different sources and pertaining to the same temporal points, or for representing negative information, are defined. An illustrative example of representing and reasoning with uncertain temporal relations is given. This paper shows how possibilistic temporal uncertainty can be handled in the setting of point algebra. Moreover, the paper emphasizes the advantages of the possibilistic approach over a probabilistic approach previously proposed. This work does for the temporal point algebra what the authors previously did for the temporal interval algebra.
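Composition of uncertain point relations follows the classical point-algebra composition table, lifted by max–min: π₃(r) = max over pairs (r₁, r₂) with r ∈ r₁∘r₂ of min(π₁(r₁), π₂(r₂)). A sketch of that lifting (the composition table is the standard one for {<, =, >}; the function names and dict encoding are mine):

```python
# Standard point-algebra composition table over the relations <, =, >.
COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
        ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
        ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'}}

def compose(pi1, pi2):
    """Max-min composition of two possibility distributions over the
    three basic point relations, yielding the distribution on x vs z
    from those on x vs y and y vs z."""
    out = {r: 0.0 for r in '<=>'}
    for r1, a in pi1.items():
        for r2, b in pi2.items():
            for r in COMP[(r1, r2)]:
                out[r] = max(out[r], min(a, b))
    return out
```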

NMR Workshop 2004 Conference Paper

Generation and evaluation of different types of arguments in negotiation

  • Leila Amgoud
  • Henri Prade

Until now, AI argumentation-based systems have been mainly developed for handling inconsistency. In that explanation-oriented perspective, only one type of argument has been considered. Several argumentation frameworks have then been proposed for generating and evaluating such arguments. However, recent works on argumentation-based negotiation have emphasized different other types of arguments such as threats, rewards, appeals, etc. The purpose of this paper is to provide a logical framework which encompasses the classical argumentation-based framework and handles the new types of arguments. More precisely, we give the logical definitions of these arguments and their weighting systems. These definitions take into account that negotiation dialogues involve not only agents’ beliefs (of various strengths) but also their goals (having maybe different priorities), the beliefs on the goals of other agents, etc. In other words, from the different belief and goal bases maintained by an agent, we can generate all the possible threats, rewards, explanations, and appeals associated with them. Finally, we show how to evaluate conflicting arguments of different types. The possibilistic logic framework is used for handling formulas with different degrees of certainty or priority. Key words: Negotiation, Argumentation

NMR Workshop 2004 Conference Paper

Ordinal and absolute representations of positive information in possibilistic logic

  • Didier Dubois
  • Souhila Kaci
  • Henri Prade

There are two readings of a possibility distribution, a representation format which is useful for encoding uncertain knowledge or preferences. The negative information reading, based on possibility and necessity measures, emphasizes the fact that some interpretations are impossible, or at least have an upper-bounded possibility level. The positive information reading points out that possibility degrees are lower bounded, and thus that some interpretations have non-zero (guaranteed) possibility degrees. This paper provides technical results for the positive view, showing how sets of absolute, or relative, constraints expressed in terms of guaranteed possibility measures can be represented in terms of a possibility distribution. Using previously obtained results for the “negative interpretation side”, it enables us to jointly handle upper and lower logical specifications of a possibility distribution on partitions of the set of interpretations, as pointed out in the conclusion.
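The positive reading constrains a distribution from below: a constraint Δ(α) ≥ λ forces every model of α to have possibility at least λ, and the smallest distribution satisfying a set of such constraints is obtained by a pointwise max. A numeric sketch of that standard construction (the encodings and names are mine):

```python
def guaranteed_possibility(pi, models):
    """Delta(alpha): the smallest possibility degree among alpha's models."""
    return min((pi[w] for w in models), default=1.0)

def least_distribution(worlds, constraints):
    """Most conservative distribution satisfying Delta(alpha_i) >= lam_i:
    pi(w) = max of the lam_i over the constraints whose formula w satisfies.
    `constraints` is a list of (set of models, level) pairs."""
    return {w: max((lam for models, lam in constraints if w in models),
                   default=0.0)
            for w in worlds}
```

The "negative" side discussed in the abstract works dually, with necessity constraints bounding the distribution from above by a pointwise min.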

KR Conference 2004 Conference Paper

Reaching agreement through argumentation: A possibilistic approach

  • Leila Amgoud
  • Henri Prade

Negotiation plays a key role as a means for sharing information and resources with the aim of looking for a common agreement. This paper proposes a new approach based on possibility theory, which integrates both the merits of argumentation-based negotiation and of heuristic methods aimed at making trade-offs. Possibilistic logic is used as a unified setting, which proves to be convenient not only for representing the mental states of the agents (beliefs possibly pervaded with uncertainty, and prioritized goals), but also for revising the belief bases and for describing the decision procedure for selecting a new offer.

KR Conference 2004 Conference Paper

Updating of a possibilistic knowledge base by crisp or fuzzy transition rules

  • Boris Mailhé
  • Henri Prade

In this paper, partial knowledge about the possible transitions which can take place in a dynamical environment is represented by a set of pairs of propositional formulae, with the following intended meaning: If the first one is true then the second will be true at the next step. More generally, a certainty level representing the lower bound of a necessity measure can be associated with the second formula for modelling uncertain transitions. A possibilistic transition graph relating possible states can be deduced from these pairs and used for updating an uncertain knowledge base encoded in possibilistic logic and semantically associated to a possibility distribution. The updating of the base can be directly performed at the syntactic level from the transition pairs and the possibilistic logic formulas, in agreement with the semantics of the base and of the transitions. As there are many sets of pairs that represent the same transition graph, it is convenient to find a representative in this class that gives the easiest syntactic computations. Besides, a second type of weighted pairs of formulas is introduced for refining the representation of the available information about transitions. While the first type of pairs encodes the transitions that are not impossible (using necessity measures), the second type of pairs provides the transitions whose possibility is guaranteed. This constitutes a bipolar description of the possible transitions.

UAI Conference 2004 Conference Paper

Using Arguments for Making Decisions: A Possibilistic Logic Approach

  • Leila Amgoud
  • Henri Prade

Humans commonly use arguments for explaining choices which are already made, or for evaluating potential choices. Each potential choice has usually pros and cons of various strengths. In spite of the usefulness of arguments in a decision making process, there have been few formal proposals handling this idea, apart from works by Fox and Parsons and by Bonet and Geffner. In this paper we propose a possibilistic logic framework where arguments are built from an uncertain knowledge base and a set of prioritized goals. The proposed approach can compute two kinds of decisions by distinguishing between pessimistic and optimistic attitudes. When the available, maybe uncertain, knowledge is consistent, as well as the set of prioritized goals (which have to be fulfilled as far as possible), the method for evaluating decisions on the basis of arguments agrees with the possibility theory-based approach to decision-making under uncertainty. Taking advantage of its relation with formal approaches to defeasible argumentation, the proposed framework can be generalized in case of partially inconsistent knowledge, or goal bases.

IJCAI Conference 2003 Conference Paper

Categorizing classes of signals by means of fuzzy gradual rules

  • Sylvie Galichet
  • Didier Dubois
  • Henri Prade

This paper presents an approach to the approximate description of univariate real-valued functions in terms of precise or imprecise reference points and interpolation between these points. It is achieved by means of gradual rules which express that the closer the variable to the abscissa of a reference point, the closer the value of the function to the ordinate of this reference point. Gradual rules enable us to specify sophisticated gauges, under the form of connected areas, inside of which the function belonging to the class under consideration should remain. This provides a simple and efficient tool for categorizing signals. This tool can be further improved by making the gauge flexible by means of fuzzy gradual rules. This is illustrated on a benchmark example.

UAI Conference 2002 Conference Paper

Bipolar Possibilistic Representations

  • Salem Benferhat
  • Didier Dubois
  • Souhila Kaci
  • Henri Prade

Recently, it has been emphasized that the possibility theory framework allows us to distinguish between i) what is possible because it is not ruled out by the available knowledge, and ii) what is possible for sure. This distinction may be useful when representing knowledge, for modelling values which are not impossible because they are consistent with the available knowledge on the one hand, and values guaranteed to be possible because reported from observations on the other hand. It is also of interest when expressing preferences, to point out values which are positively desired among those which are not rejected. This distinction can be encoded by two types of constraints expressed in terms of necessity measures and in terms of guaranteed possibility functions, which induce a pair of possibility distributions at the semantic level. A consistency condition should ensure that what is claimed to be guaranteed as possible is indeed not impossible. The present paper investigates the representation of this bipolar view, including the case when it is stated by means of conditional measures, or by means of comparative context-dependent constraints. The interest of this bipolar framework, which has been recently stressed for expressing preferences, is also pointed out in the representation of diagnostic knowledge.

NMR Workshop 2002 Conference Paper

Possibility theory in information fusion

  • Didier Dubois
  • Henri Prade

Possibility theory and the body of aggregation operations from fuzzy set theory provide some tools to address the problem of merging information coming from several sources. Possibility theory is a representation framework that can model various kinds of information items: numbers, intervals, consonant random sets, special kinds of probability families, as well as linguistic information, and uncertain formulae in logical settings. The possibilistic approach to fusion is general enough to encompass logical modes of combination (conjunctive and disjunctive) as well as fusion modes used in statistics. This general framework makes it possible to import inconsistency handling methods, inherited from logic, into numerical fusion problems. The approach applies to sensor fusion, the aggregation of expert opinions, as well as the merging of databases, especially in the case of poor, qualitative information.

UAI Conference 2001 Conference Paper

Graphical readings of possibilistic logic bases

  • Salem Benferhat
  • Didier Dubois
  • Souhila Kaci
  • Henri Prade

Possibility theory offers either a qualitative, or a numerical framework for representing uncertainty, in terms of dual measures of possibility and necessity. This leads to the existence of two kinds of possibilistic causal graphs where the conditioning is either based on the minimum, or the product operator. Benferhat et al. (1999) have investigated the connections between min-based graphs and possibilistic logic bases (made of classical formulas weighted in terms of certainty). This paper deals with a more difficult issue: the product-based graphical representations of possibilistic bases, which provides an easy structural reading of possibilistic bases. Moreover, this paper also provides another reading of possibilistic bases in terms of comparative preferences of the form "in the context p, q is preferred to not q". This enables us to make explicit the preferences underlying a set of goals with different levels of priority.

UAI Conference 2000 Conference Paper

A principled analysis of merging operations in possibilistic logic

  • Souhila Kaci
  • Salem Benferhat
  • Didier Dubois
  • Henri Prade

Possibilistic logic offers a qualitative framework for representing pieces of information associated with levels of uncertainty or priority. The fusion of information from multiple sources is discussed in this setting. Different classes of merging operators are considered, including conjunctive, disjunctive, reinforcement, adaptive and averaging operators. We then propose to analyse these classes in terms of postulates. This is done by first extending the postulates for merging classical bases to the case where priorities are available.

UAI Conference 1999 Conference Paper

Assessing the value of a candidate: Comparing belief function and possibility theories

  • Didier Dubois
  • Michel Grabisch
  • Henri Prade
  • Philippe Smets

The problem of assessing the value of a candidate is viewed here as a multiple combination problem. On the one hand a candidate can be evaluated according to different criteria, and on the other hand several experts are supposed to assess the value of candidates according to each criterion. Criteria are not equally important, and experts are not equally competent or reliable. Moreover, levels of satisfaction of criteria, or levels of confidence, are only assumed to take their values in qualitative scales which are just linearly ordered. The problem is discussed within two frameworks, the transferable belief model and qualitative possibility theory. They respectively offer a quantitative and a qualitative setting for handling the problem, thus providing a way to compare the nature of the underlying assumptions.

AAAI Conference 1999 Conference Paper

Implicative and Conjunctive Fuzzy Rules — A Tool for Reasoning from Knowledge and Examples

  • Laurent Ughetto (IRISA, IUT de Lannion)
  • Didier Dubois
  • Henri Prade (IRIT - CNRS, Université Paul Sabatier)

Fuzzy rule-based systems have been mainly used as a convenient tool for synthesizing control laws from data. Recently, in a knowledge representation-oriented perspective, a typology of fuzzy rules has been laid bare, by emphasizing the distinction between implicative and conjunctive fuzzy rules. The former describe pieces of generic knowledge either tainted with uncertainty or tolerant to similarity, while the latter encode example-originated information expressing either mere possibilities or how typical situations can be extrapolated. The different types of fuzzy rules are first contrasted, and their representation discussed in the framework of possibility theory. Then, the paper studies the conjoint use of fuzzy rules expressing knowledge (as fuzzy constraints which restrict the possible states of the world) or gathering examples (which testify to the possibility of appearance of some states). Coherence and inference issues are briefly addressed.

UAI Conference 1999 Conference Paper

Possibilistic logic bases and possibilistic graphs

  • Salem Benferhat
  • Didier Dubois
  • Laurent Garcia
  • Henri Prade

Possibilistic logic bases and possibilistic graphs are two different frameworks of interest for representing knowledge. The former stratifies the pieces of knowledge (expressed by logical formulas) according to their level of certainty, while the latter exhibits relationships between variables. The two types of representations are semantically equivalent when they lead to the same possibility distribution (which rank-orders the possible interpretations). A possibility distribution can be decomposed using a chain rule which may be based on two different kinds of conditioning which exist in possibility theory (one based on product in a numerical setting, one based on minimum operation in a qualitative setting). These two types of conditioning induce two kinds of possibilistic graphs. In both cases, a translation of these graphs into possibilistic bases is provided. The converse translation from a possibilistic knowledge base into a min-based graph is also described.

UAI Conference 1998 Conference Paper

Comparative uncertainty, belief functions and accepted beliefs

  • Didier Dubois
  • Hélène Fargier
  • Henri Prade

This paper relates comparative belief structures and a general view of belief management in the setting of deductively closed logical representations of accepted beliefs. We show that the range of compatibility between the classical deductive closure and uncertain reasoning covers precisely the nonmonotonic 'preferential' inference system of Kraus, Lehmann and Magidor and nothing else. In terms of uncertain reasoning any possibility or necessity measure gives birth to a structure of accepted beliefs. The classes of probability functions and of Shafer's belief functions which yield belief sets prove to be very special ones.

AAAI Conference 1998 Conference Paper

Logical Representation and Computation of Optimal Decisions in a Qualitative Setting

  • Didier Dubois
  • Henri Prade

This paper describes a logical machinery for computing decisions based on an ATMS procedure, where the available knowledge on the state of the world is described by a possibilistic propositional logic base (i.e., a collection of logical statements associated with qualitative certainty levels). The preferences of the user are also described by another possibilistic logic base, whose formula weights are interpreted in terms of priorities and whose formulas express goals. Two attitudes are allowed for the decision maker: a pessimistic, uncertainty-averse one and an optimistic one. The computed decisions are in agreement with a qualitative counterpart to classical expected utility theory for decision under uncertainty.

UAI Conference 1998 Conference Paper

Qualitative Decision Theory with Sugeno Integrals

  • Didier Dubois
  • Henri Prade
  • Régis Sabbadin

This paper presents an axiomatic framework for qualitative decision under uncertainty in a finite setting. The corresponding utility is expressed by a sup-min expression, called Sugeno (or fuzzy) integral. Technically speaking, Sugeno integral is a median, which is indeed a qualitative counterpart to the averaging operation underlying expected utility. The axiomatic justification of Sugeno integral-based utility is expressed in terms of preference between acts as in Savage decision theory. Pessimistic and optimistic qualitative utilities, based on necessity and possibility measures, previously introduced by two of the authors, can be retrieved in this setting by adding appropriate axioms.

UAI Conference 1997 Conference Paper

Decision-Making under Ordinal Preferences and Comparative Uncertainty

  • Didier Dubois
  • Hélène Fargier
  • Henri Prade

This paper investigates the problem of finding a preference relation on a set of acts from the knowledge of an ordering on events (subsets of states of the world) describing the decision-maker's (DM's) uncertainty and an ordering of consequences of acts, describing the DM's preferences. However, contrary to classical approaches to decision theory, we try to do it without resorting to any numerical representation of utility or uncertainty, and without even using any qualitative scale on which both uncertainty and preference could be mapped. It is shown that although many axioms of Savage theory can be preserved, and despite the intuitive appeal of the method for constructing a preference over acts, the approach is inconsistent with a probabilistic representation of uncertainty, but leads to the kind of uncertainty theory encountered in non-monotonic reasoning (especially preferential and rational inference), closely related to possibility theory. Moreover, the method turns out either to be not very decisive or to lead to very risky decisions, although its basic principles look sound. This paper raises the question of the very possibility of purely symbolic approaches to Savage-like decision-making under uncertainty and obtains preliminary negative results.

AIJ Journal 1997 Journal Article

Nonmonotonic reasoning, conditional objects and possibility theory

  • Salem Benferhat
  • Didier Dubois
  • Henri Prade

This short paper relates the conditional object-based and possibility theory-based approaches for reasoning with conditional statements pervaded with exceptions, to other methods in nonmonotonic reasoning which have been independently proposed: namely, Lehmann's preferential and rational closure entailments which obey normative postulates, the infinitesimal probability approach, and the conditional (modal) logics-based approach. All these methods are shown to be equivalent with respect to their capabilities for reasoning with conditional knowledge although they are based on different modeling frameworks. It thus provides a unified understanding of nonmonotonic consequence relations. More particularly, conditional objects, a purely qualitative counterpart to conditional probabilities, offer a very simple semantics, based on a 3-valued calculus, for the preferential entailment, while in the purely ordinal setting of possibility theory both the preferential and the rational closure entailments can be represented.

UAI Conference 1996 Conference Paper

Belief Revision with Uncertain Inputs in the Possibilistic Setting

  • Didier Dubois
  • Henri Prade

This paper discusses belief revision under uncertain inputs in the framework of possibility theory. Revision can be based on two possible definitions of the conditioning operation, one based on min operator which requires a purely ordinal scale only, and another based on product, for which a richer structure is needed, and which is a particular case of Dempster's rule of conditioning. Besides, revision under uncertain inputs can be understood in two different ways depending on whether the input is viewed, or not, as a constraint to enforce. Moreover, it is shown that M.A. Williams' transmutations, originally defined in the setting of Spohn's functions, can be captured in this framework, as well as Boutilier's natural revision.

UAI Conference 1996 Conference Paper

Coping with the Limitations of Rational Inference in the Framework of Possibility Theory

  • Salem Benferhat
  • Didier Dubois
  • Henri Prade

Possibility theory offers a framework where both Lehmann's "preferential inference" and the more productive (but less cautious) "rational closure inference" can be represented. However, there are situations where the second inference does not provide expected results, either because it cannot produce them or because it even provides counter-intuitive conclusions. This state of affairs is not due to the principle of selecting a unique ordering of interpretations (which can be encoded by one possibility distribution), but rather to the absence of constraints expressing pieces of knowledge we have implicitly in mind. It is advocated in this paper that constraints induced by independence information can help find the right ordering of interpretations. In particular, independence constraints can be systematically assumed with respect to formulas composed of literals which do not appear in the conditional knowledge base, or for default rules with respect to situations which are "normal" according to the other default rules in the base. The notion of independence which is used can be easily expressed in the qualitative setting of possibility theory. Moreover, when a counter-intuitive plausible conclusion of a set of defaults is in its rational closure, but not in its preferential closure, it is always possible to repair the set of defaults so as to produce the desired conclusion.

UAI Conference 1995 Conference Paper

Numerical representations of acceptance

  • Didier Dubois
  • Henri Prade

Accepting a proposition means that our confidence in this proposition is strictly greater than the confidence in its negation. This paper investigates the subclass of uncertainty measures, expressing confidence, that capture the idea of acceptance, which we call acceptance functions. Due to the monotonicity property of confidence measures, the acceptance of a proposition entails the acceptance of any of its logical consequences. In agreement with the idea that a belief set (in the sense of Gardenfors) must be closed under logical consequence, it is also required that the separate acceptance of two propositions entail the acceptance of their conjunction. Necessity (and possibility) measures agree with this view of acceptance, while probability and belief functions generally do not. General properties of acceptance functions are established. The motivation behind this work is the investigation of a setting for belief revision more general than the one proposed by Alchourron, Gardenfors and Makinson, in connection with the notion of conditioning.

UAI Conference 1995 Conference Paper

Practical model-based diagnosis with qualitative possibilistic uncertainty

  • Didier Cayrac
  • Didier Dubois
  • Henri Prade

An approach to fault isolation that exploits vastly incomplete models is presented. It relies on separate descriptions of each component behavior, together with the links between them, which enables focusing of the reasoning on the relevant part of the system. As normal observations do not need explanation, the behavior of the components is limited to anomaly propagation. Diagnostic solutions are disorders (fault modes or abnormal signatures) that are consistent with the observations, as well as abductive explanations. An ordinal representation of uncertainty based on possibility theory provides a simple exception-tolerant description of the component behaviors. We can, for instance, distinguish between effects that are more or less certainly present (or absent) and effects that are more or less possibly present (or absent) when a given anomaly is present. A realistic example illustrates the benefits of this approach.

UAI Conference 1994 Conference Paper

An Ordinal View of Independence with Application to Plausible Reasoning

  • Didier Dubois
  • Luis Fariñas del Cerro
  • Andreas Herzig
  • Henri Prade

An ordinal view of independence is studied in the framework of possibility theory. We investigate three possible definitions of dependence, of increasing strength. One of them is the counterpart to the multiplication law in probability theory, and the two others are based on the notion of conditional possibility. These two have enough expressive power to support the whole possibility theory, and a complete axiomatization is provided for the strongest one. Moreover we show that weak independence is well-suited to the problems of belief change and plausible reasoning, especially to address the problem of blocking of property inheritance in exception-tolerant taxonomic reasoning.

KER Journal 1994 Journal Article

Non-standard theories of uncertainty in knowledge representation and reasoning

  • Didier Dubois
  • Henri Prade

This paper provides a survey of the state of the art in plausible reasoning, that is, exception-tolerant reasoning under incomplete information. Three requirements are necessary for a formalism in order to cope with this problem: (i) making a clear distinction between factual information and generic knowledge; (ii) having a correct representation of partial ignorance; (iii) providing a nonmonotonic inference mechanism. Classical logic fails on requirements (i) and (iii), whilst the Bayesian approach does not fulfil (ii) in an unbiased way. In this perspective, various uncertainty modelling frameworks are reviewed: MYCIN-like fully compositional calculi, belief functions, upper and lower probability systems, and possibility theory. Possibility theory enables classical logic to be extended to layered sets of formulae, where layers express certainty levels. Finally, it is explained how generic knowledge can be expressed by constraints on possibility measures, and how possibilistic inferences can encode nonmonotonic reasoning in agreement with the Lehmann et al. postulates.

UAI Conference 1993 Conference Paper

A fuzzy relation-based extension of Reggia's relational model for diagnosis handling uncertain and incomplete information

  • Didier Dubois
  • Henri Prade

Relational models for diagnosis are based on a direct description of the association between disorders and manifestations. This type of model has been specially used and developed by Reggia and his co-workers in the late eighties as a basic starting point for approaching diagnosis problems. The paper proposes a new relational model which includes Reggia's model as a particular case and which allows for a more expressive representation of the observations and of the manifestations associated with disorders. The model distinguishes (i) between manifestations which are certainly absent and those which are not (yet) observed, and (ii) between manifestations which cannot be caused by a given disorder and manifestations for which we do not know if they can or cannot be caused by this disorder. This new model, which can handle uncertainty in a non-probabilistic way, is based on possibility theory and so-called twofold fuzzy sets, previously introduced by the authors.

UAI Conference 1993 Conference Paper

Argumentative inference in uncertain and inconsistent knowledge bases

  • Salem Benferhat
  • Didier Dubois
  • Henri Prade

This paper presents and discusses several methods for reasoning from inconsistent knowledge bases. A so-called argumentative-consequence relation taking into account the existence of consistent arguments in favor of a conclusion and the absence of consistent arguments in favor of its contrary, is particularly investigated. Flat knowledge bases, i.e. without any priority between their elements, as well as prioritized ones where some elements are considered as more strongly entrenched than others are studied under different consequence relations. Lastly a paraconsistent-like treatment of prioritized knowledge bases is proposed, where both the level of entrenchment and the level of paraconsistency attached to a formula are propagated. The priority levels are handled in the framework of possibility theory.

IJCAI Conference 1993 Conference Paper

Belief revision and updates in numerical formalisms — An overview, with new results for the possibilistic framework

  • Didier Dubois
  • Henri Prade

The difference between Bayesian conditioning and Lewis' imaging is somewhat similar to the one between Gardenfors' belief revision and Katsuno and Mendelzon's updating in the logical framework. Counterparts in possibility theory of these two operations are presented, including the case of conditioning upon an uncertain observation. Possibilistic conditioning satisfies all the postulates for belief revision, and possibilistic imaging all the updating postulates. Lastly, a third operation called "focusing" is naturally introduced in the setting of belief and plausibility functions.

IJCAI Conference 1993 Conference Paper

Inconsistency management and prioritized syntax-based entailment

  • Salem Benferhat
  • Claudette Cayrol
  • Didier Dubois
  • Jerome Lang
  • Henri Prade

The idea of ordering plays a basic role in commonsense reasoning for addressing three interrelated tasks: inconsistency handling, belief revision and plausible inference. We study the behavior of non-monotonic inferences induced by various methods for priority-based handling of inconsistent sets of classical formulas. One of them is based on a lexicographic ordering of maximal consistent subsets, and refines Brewka's preferred sub-theories. This new approach leads to a non-monotonic inference which satisfies the "rationality" property while solving the problem of blocking of property inheritance. It differs from and improves previous equivalent approaches such as Gardenfors and Makinson's expectation-based inference, Pearl's System Z and possibilistic logic.

UAI Conference 1992 Conference Paper

A Symbolic Approach to Reasoning with Linguistic Quantifiers

  • Didier Dubois
  • Henri Prade
  • Lluís Godo
  • Ramón López de Mántaras

This paper investigates the possibility of performing automated reasoning in probabilistic logic when probabilities are expressed by means of linguistic quantifiers. Each linguistic term is expressed as a prescribed interval of proportions. Then, instead of propagating numbers, qualitative terms are propagated in accordance with the numerical interpretation of these terms. The quantified syllogism, modelling the chaining of probabilistic rules, is studied in this context. It is shown that a qualitative counterpart of this syllogism makes sense, and is relatively independent of the threshold defining the linguistically meaningful intervals, provided that these threshold values remain in accordance with the intuition. The inference power is less than that of a full-fledged probabilistic constraint propagation device, but better corresponds to what could be thought of as commonsense probabilistic reasoning.

UAI Conference 1991 Conference Paper

A Logic of Graded Possibility and Certainty Coping with Partial Inconsistency

  • Jérôme Lang
  • Didier Dubois
  • Henri Prade

A semantics is given to possibilistic logic, a logic that handles weighted classical logic formulae, and where weights are interpreted as lower bounds on degrees of certainty or possibility, in the sense of Zadeh's possibility theory. The proposed semantics is based on fuzzy sets of interpretations. It is tolerant to partial inconsistency. Satisfiability is extended from interpretations to fuzzy sets of interpretations, each fuzzy set representing a possibility distribution describing what is known about the state of the world. A possibilistic knowledge base is then viewed as a set of possibility distributions that satisfy it. The refutation method of automated deduction in possibilistic logic, based on previously introduced generalized resolution principle is proved to be sound and complete with respect to the proposed semantics, including the case of partial inconsistency.

UAI Conference 1991 Conference Paper

Constraint Propagation with Imprecise Conditional Probabilities

  • Stéphane Amarger
  • Didier Dubois
  • Henri Prade

An approach to reasoning with default rules where the proportion of exceptions, or more generally the probability of encountering an exception, can be at least roughly assessed is presented. It is based on local uncertainty propagation rules which provide the best bracketing of a conditional probability of interest from the knowledge of the bracketing of some other conditional probabilities. A procedure that uses two such propagation rules repeatedly is proposed in order to estimate any simple conditional probability of interest from the available knowledge. The iterative procedure, that does not require independence assumptions, looks promising with respect to the linear programming method. Improved bounds for conditional probabilities are given when independence assumptions hold.

AIJ Journal 1991 Journal Article

Epistemic entrenchment and possibilistic logic

  • Didier Dubois
  • Henri Prade

This note points out the close relationships existing between recent proposals in the theory of belief revision made by Gärdenfors based on the notion of epistemic entrenchment, and possibility theory applied to automated reasoning under uncertainty. It is claimed that the only numerical counterparts of epistemic entrenchment relations are so-called necessity measures that are dual to possibility measures, and are also mathematically equivalent to consonant belief functions in the sense of Shafer. Relationships between Spohn's ordinal conditional functions and possibility theory are also laid bare.

UAI Conference 1990 Conference Paper

Updating with belief functions, ordinal conditional functions and possibility measures

  • Didier Dubois
  • Henri Prade

This paper discusses how a measure of uncertainty representing a state of knowledge can be updated when new information, which may be pervaded with uncertainty, becomes available. This problem is considered in various frameworks, namely Shafer's evidence theory, Zadeh's possibility theory, and Spohn's theory of epistemic states. In the first two cases, analogues of Jeffrey's rule of conditioning are introduced and discussed. The relations between Spohn's model and possibility theory are emphasized, and Spohn's updating rule is contrasted with the Jeffrey-like rule of conditioning in possibility theory. Recent results by Shenoy on the combination of ordinal conditional functions are reinterpreted in the language of possibility theory. It is shown that Shenoy's combination rule has a well-known possibilistic counterpart.

AIJ Journal 1988 Journal Article

Default reasoning and possibility theory

  • Didier Dubois
  • Henri Prade

This note discusses an approach, recently outlined by Ron Yager, to default reasoning based on possibility theory. Some limitations of his technique are pointed out, and remedied in the same theoretical framework. The proposed approach leads to addressing the question of fusing a default value with a piece of incomplete but certain information which may only partially contradict the default value.