Arrow Research search

Author name cluster

David Poole

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

39 papers
1 author row

Possible papers

39

AAAI Conference 2026 Conference Paper

Why Isn’t Relational Learning Taking Over the World?

  • David Poole

Artificial intelligence seems to be taking over the world with systems that model pixels, words, and phonemes. The world is arguably made up not of pixels, words, and phonemes but of entities (objects, things, including events) with properties and relations among them. Surely we should model these, not the perception or description of them. You might suspect that the concentration on modeling words and pixels is because all of the (valuable) data in the world is in terms of text and images, but if you look into almost any company you will find that its most valuable data is in spreadsheets, databases and other relational formats. These are not the forms studied in introductory machine learning; they are full of product numbers, student numbers, transaction numbers and other identifiers that can't be interpreted naively as numbers. The field that studies this sort of data goes by various names, including relational learning and statistical relational AI. This paper explains why relational learning is not taking over the world -- except in a few cases with restricted relations -- and what needs to be done to bring it to its rightful prominence.

JMLR Journal 2023 Journal Article

Knowledge Hypergraph Embedding Meets Relational Algebra

  • Bahare Fatemi
  • Perouz Taslakian
  • David Vazquez
  • David Poole

Relational databases are a successful model for data storage, and rely on query languages for information retrieval. Most of these query languages are based on relational algebra, a mathematical formalization at the core of relational models. Knowledge graphs are flexible data storage structures that allow for knowledge completion using machine learning techniques. Knowledge hypergraphs generalize knowledge graphs by allowing multi-argument relations. This work studies knowledge hypergraph completion through the lens of relational algebra and its core operations. We explore the space between relational algebra foundations and machine learning techniques for knowledge completion. We investigate whether such methods can capture high-level abstractions in terms of relational algebra operations. We propose a simple embedding-based model called Relational Algebra Embedding (ReAlE) that performs link prediction in knowledge hypergraphs. We show theoretically that ReAlE is fully expressive and can represent the relational algebra operations of renaming, projection, set union, selection, and set difference. We verify experimentally that ReAlE outperforms state-of-the-art models in knowledge hypergraph completion, and in representing each of these primitive relational algebra operations. For the latter experiment, we generate a synthetic knowledge hypergraph, for which we design a generation algorithm based on the Erdős-Rényi model for random graphs.

IJCAI Conference 2020 Conference Paper

Knowledge Hypergraphs: Prediction Beyond Binary Relations

  • Bahare Fatemi
  • Perouz Taslakian
  • David Vazquez
  • David Poole

Knowledge graphs store facts using relations between two entities. In this work, we address the question of link prediction in knowledge hypergraphs where relations are defined on any number of entities. While techniques exist (such as reification) that convert non-binary relations into binary ones, we show that current embedding-based methods for knowledge graph completion do not work well out of the box for knowledge graphs obtained through these techniques. To overcome this, we introduce HSimplE and HypE, two embedding-based methods that work directly with knowledge hypergraphs. In both models, the prediction is a function of the relation embedding, the entity embeddings and their corresponding positions in the relation. We also develop public datasets, benchmarks and baselines for hypergraph prediction and show experimentally that the proposed models are more effective than the baselines.
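
The reification technique mentioned above can be sketched in a few lines. This is an illustrative sketch, not code from the paper; the relation name `buys` and the auxiliary-entity naming scheme are assumptions chosen for the example.

```python
def reify(fact):
    """Convert one n-ary fact, e.g. ("buys", "alice", "book", "store"),
    into binary triples by introducing an auxiliary event entity."""
    rel, *args = fact
    event = f"{rel}_event_{'_'.join(args)}"  # fresh auxiliary entity
    # One binary triple per argument position of the original relation.
    return [(event, f"{rel}_arg{i + 1}", a) for i, a in enumerate(args)]

triples = reify(("buys", "alice", "book", "store"))
# Each triple links the auxiliary event entity to one original argument.
```

Standard binary-relation embedding models can then be run on the resulting triples, which is the out-of-the-box setup the paper shows performs poorly compared to working on the hypergraph directly.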

IJCAI Conference 2020 Conference Paper

Predicting Landslides Using Locally Aligned Convolutional Neural Networks

  • Ainaz Hajimoradlou
  • Gioachino Roberti
  • David Poole

Landslides, movement of soil and rock under the influence of gravity, are common phenomena that cause significant human and economic losses every year. Experts use heterogeneous features such as slope, elevation, land cover, lithology, rock age, and rock family to predict landslides. To work with such features, we adapted convolutional neural networks to consider relative spatial information for the prediction task. Traditional filters in these networks either have a fixed orientation or are rotationally invariant. Intuitively, the filters should orient uphill, but there is not enough data to learn the concept of uphill; instead, it can be provided as prior knowledge. We propose a model called Locally Aligned Convolutional Neural Network, LACNN, that follows the ground surface at multiple scales to predict possible landslide occurrence for a single point. To validate our method, we created a standardized dataset of georeferenced images consisting of the heterogeneous features as inputs, and compared our method to several baselines, including linear regression, a neural network, and a convolutional network, using log-likelihood error and Receiver Operating Characteristic curves on the test set. Our model achieves 2-7% improvement in terms of accuracy and 2-15% boost in terms of log likelihood compared to the other proposed baselines.

AAAI Conference 2019 Conference Paper

Improved Knowledge Graph Embedding Using Background Taxonomic Information

  • Bahare Fatemi
  • Siamak Ravanbakhsh
  • David Poole

Knowledge graphs are used to represent relational information in terms of triples. To enable learning about domains, embedding models, such as tensor factorization models, can be used to make predictions of new triples. Often there is background taxonomic information (in terms of subclasses and subproperties) that should also be taken into account. We show that existing fully expressive (a.k.a. universal) models cannot provably respect subclass and subproperty information. We show that minimal modifications to an existing knowledge graph completion method enables injection of taxonomic information. Moreover, we prove that our model is fully expressive, assuming a lower-bound on the size of the embeddings. Experimental results on public knowledge graphs show that despite its simplicity our approach is surprisingly effective.

AAAI Conference 2018 Conference Paper

RelNN: A Deep Neural Model for Relational Learning

  • Seyed Mehran Kazemi
  • David Poole

Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in recent years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates applying techniques developed within the deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.

NeurIPS Conference 2018 Conference Paper

SimplE Embedding for Link Prediction in Knowledge Graphs

  • Seyed Mehran Kazemi
  • David Poole

Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement of CP (which we call SimplE) to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into these embeddings through weight tying. We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. SimplE's code is available on GitHub at https://github.com/Mehran-k/SimplE.
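
The SimplE scoring function the abstract describes is compact enough to sketch directly. The embedding values below are illustrative, not learned; only the form of the score comes from the paper.

```python
import numpy as np

def triple_product(a, b, c):
    # <a, b, c> = sum_i a_i * b_i * c_i, as in CP decomposition.
    return float(np.sum(a * b * c))

def simple_score(h_e1, t_e1, h_e2, t_e2, v_r, v_r_inv):
    # SimplE ties the two per-entity embeddings together by averaging
    # the forward score <h_e1, v_r, t_e2> with the score of the
    # inverse relation applied in the other direction.
    return 0.5 * (triple_product(h_e1, v_r, t_e2) +
                  triple_product(h_e2, v_r_inv, t_e1))

d = 4  # embedding dimension (illustrative)
h1 = t1 = h2 = t2 = np.ones(d)
r = r_inv = np.ones(d)
score = simple_score(h1, t1, h2, t2, r, r_inv)  # 0.5 * (4 + 4) = 4.0
```

In plain CP, only the first term would be used, so the head embedding of an entity would never interact with its tail embedding; the averaged inverse-relation term is what makes the two dependent.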

KR Conference 2018 Short Paper

Structure Learning for Relational Logistic Regression: An Ensemble Approach

  • Nandini Ramanan
  • Gautam Kunapuli
  • Tushar Khot
  • Bahare Fatemi
  • Seyed Mehran Kazemi
  • David Poole
  • Kristian Kersting
  • Sriraam Natarajan

We consider the problem of learning Relational Logistic Regression (RLR). Unlike standard logistic regression, the features of RLR are first-order formulae with associated weight vectors instead of scalar weights. We turn the problem of learning RLR into learning these vector-weighted formulae and develop a learning algorithm based on functional-gradient boosting methods for probabilistic logic models. Our empirical evaluation on standard data sets demonstrates the superiority of our approach over other methods for learning RLR.

KR Conference 2016 Short Paper

Knowledge Compilation for Lifted Probabilistic Inference: Compiling to a Low-Level Language

  • Seyed Mehran Kazemi
  • David Poole

Algorithms based on first-order knowledge compilation are currently the state of the art for lifted inference. These algorithms typically compile a probabilistic relational model into an intermediate data structure and use it to answer many inference queries. In this paper, we propose compiling a probabilistic relational model directly into a low-level target (e.g., C or C++) program instead of an intermediate data structure, taking advantage of advances in program compilation. Our experiments show orders-of-magnitude speedups compared to existing approaches.
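
The compile-to-a-program idea can be illustrated with a toy sketch, using Python source as a stand-in for the C/C++ target the paper uses; the two-variable factor and the function names are illustrative assumptions.

```python
def compile_marginal(p_a, p_b_given_a):
    """Emit straight-line source that computes P(B = true) by
    eliminating A, instead of interpreting a data structure."""
    src = (
        "def p_b_true():\n"
        f"    t0 = {p_a} * {p_b_given_a[True]}\n"
        f"    t1 = {1 - p_a} * {p_b_given_a[False]}\n"
        "    return t0 + t1\n"
    )
    namespace = {}
    exec(src, namespace)  # stand-in for compiling the generated C code
    return namespace["p_b_true"]

f = compile_marginal(0.3, {True: 0.9, False: 0.2})
answer = f()  # 0.3 * 0.9 + 0.7 * 0.2 = 0.41
```

The payoff in the paper comes from handing the generated program to an optimizing compiler, so repeated queries run flat, branch-free arithmetic rather than traversing a circuit-like data structure.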

KR Conference 2016 Short Paper

Negation Without Negation in Probabilistic Logic Programming

  • David Buchman
  • David Poole

Probabilistic logic programs without negation can have cycles (with a preference for false), but cannot represent all conditional distributions. Probabilistic logic programs with negation can represent arbitrary conditional probabilities, but with cycles they create logical inconsistencies. We show how allowing negative noise probabilities allows us to represent arbitrary conditional probabilities without negations. Noise probabilities for non-exclusive rules are difficult to interpret and unintuitive to manipulate; to alleviate this we define “probability-strengths” which provide an intuitive additive algebra for combining rules. For acyclic programs we prove what constraints on the strengths allow for proper distributions on the non-noise variables and allow for all non-extreme distributions to be represented. We show how arbitrary CPDs can be converted into this form in a canonical way. Furthermore, if a joint distribution can be compactly represented by a cyclic program with negations, we show how it can also be compactly represented with negative noise probabilities and no negations. This allows algorithms for exact inference that do not support negations to be applicable to probabilistic logic programs with negations.

NeurIPS Conference 2016 Conference Paper

New Liftable Classes for First-Order Probabilistic Inference

  • Seyed Mehran Kazemi
  • Angelika Kimmig
  • Guy Van den Broeck
  • David Poole

Statistical relational models provide compact encodings of probabilistic dependencies in relational domains, but result in highly intractable graphical models. The goal of lifted inference is to carry out probabilistic inference without needing to reason about each individual separately, by instead treating exchangeable, undistinguished objects as a whole. In this paper, we study the domain recursion inference rule, which, despite its central role in early theoretical results on domain-lifted inference, has later been believed redundant. We show that this rule is more powerful than expected, and in fact significantly extends the range of models for which lifted inference runs in time polynomial in the number of individuals in the domain. This includes an open problem called S4, the symmetric transitivity model, and a first-order logic encoding of the birthday paradox. We further identify new classes S2FO2 and S2RU of domain-liftable theories, which respectively subsume FO2 and recursively unary theories, the largest classes of domain-liftable theories known so far, and show that using domain recursion can achieve exponential speedup even in theories that cannot fully be lifted with the existing set of inference rules.

KR Conference 2016 Conference Paper

Probabilistic Models over Weighted Orderings: Fixed-Parameter Tractable Variable Elimination

  • Thomas Lukasiewicz
  • Maria Vanina Martinez
  • David Poole
  • Gerardo Simari

Probabilistic models with weighted formulas, known as Markov models or log-linear models, are used in many domains. Models of weighted orderings between elements have recently been proposed as flexible tools to express preferences under uncertainty, and are potentially useful in applications like planning, temporal reasoning, and user modeling. Their computational properties are very different from those of conventional Markov models; because of the transitivity of the “less than” relation, standard methods that exploit the structure of the models, such as variable elimination, are not directly applicable, as there are no conditional independencies between the orderings within connected components. The best known algorithms for general inference in these models are exponential in the number of statements. Here, we present the first algorithms that exploit the available structure. We begin with the special case of models in the form of chains; we present an exact O(n^3) algorithm, where n is the total number of elements. Next, we generalize this technique to models in which the set of statements is comprised of arbitrary sets of atomic weighted preference formulas (while the query and evidence are conjunctions of atomic preference formulas); the resulting exact algorithm runs in time O(m · n^2 · n^c), where m is the number of preference formulas, n is the number of elements, and c is the maximum number of elements in a linear cut (which depends both on the structure of the model and the order in which the elements are processed). This algorithm is therefore tractable for cases in which c can be bounded to a low value. Finally, we report on the results of an empirical evaluation of both algorithms, showing how they scale with reasonably-sized models.

AAAI Conference 2015 Conference Paper

Representing Aggregators in Relational Probabilistic Models

  • David Buchman
  • David Poole

We consider the problem of, given a probabilistic model on a set of random variables, how to add a new variable that depends on the other variables, without changing the original distribution. In particular, we consider relational models (such as Markov logic networks (MLNs)), where we cannot directly define conditional probabilities. In relational models, there may be an unbounded number of parents in the grounding, and conditional distributions need to be defined in terms of aggregators. The question we ask is whether and when it is possible to represent conditional probabilities at all in various relational models. Some aggregators have been shown to be representable by MLNs, by adding auxiliary variables; however it was unknown whether they could be defined without auxiliary variables. For other aggregators, it was not known whether they can be represented by MLNs at all. We obtained surprisingly strong negative results on the capability of flexible undirected relational models such as MLNs to represent aggregators without affecting the original model’s distribution. We provide a map of what aspects of the models, including the use of auxiliary variables and quantifiers, result in the ability to represent various aggregators. In addition, we provide proof techniques which can be used to facilitate future theoretic results on relational models, and demonstrate them on relational logistic regression (RLR).

AAAI Conference 2014 Conference Paper

Elimination Ordering in Lifted First-Order Probabilistic Inference

  • Seyed Mehran Kazemi
  • David Poole

Various representations and inference methods have been proposed for lifted probabilistic inference in relational models. Many of these methods choose an order to eliminate (or branch on) the parameterized random variables. Similar to such methods for non-relational probabilistic inference, the order of elimination has a significant role in the performance of the algorithms. Since finding the best order is NP-complete even for non-relational models, heuristics have been proposed to find good orderings in the non-relational models. In this paper, we show that these heuristics are inefficient for relational models, because they fail to consider the population sizes associated with logical variables. We extend existing heuristics for non-relational models and propose new heuristics for relational models. We evaluate the existing and new heuristics on a range of generated relational graphs.

KR Conference 2014 Conference Paper

Relational Logistic Regression

  • Seyed Mehran Kazemi
  • David Buchman
  • Kristian Kersting
  • Sriraam Natarajan
  • David Poole

Logistic regression is a commonly used representation for aggregators in Bayesian belief networks when a child has multiple parents. In this paper we consider extending logistic regression to relational models, where we want to model varying populations and interactions among parents. We first examine the representational problems caused by population variation. We show how these problems arise even in simple cases with a single parametrized parent, and propose a linear relational logistic regression which we show can represent arbitrary linear (in population size) decision thresholds, whereas traditional logistic regression cannot. We then examine representing interactions among the parents of a child node, and representing non-linear dependency on population size. We propose a multi-parent relational logistic regression which can represent interactions among parents and arbitrary polynomial decision thresholds. Finally, we show how other well-known aggregators can be represented using this relational logistic regression.
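
As a rough illustration of the population-size issue the paper analyses, a logistic-regression aggregator applies a sigmoid to a weighted count of parent values; the weights and counts below are illustrative assumptions, not from the paper.

```python
import math

def logistic_aggregator(n_true, n_false, w0=-1.0, w_true=0.5, w_false=-0.2):
    """Logistic regression over counts of true/false parents.

    In relational logistic regression the weights multiply counts
    derived from first-order formulae, so the same model is applied
    across different population sizes."""
    z = w0 + w_true * n_true + w_false * n_false
    return 1.0 / (1.0 + math.exp(-z))

# Same number of "true" parents, very different population sizes:
p_small = logistic_aggregator(n_true=3, n_false=1)
p_large = logistic_aggregator(n_true=3, n_false=97)
```

Because the prediction depends on the raw counts, it shifts as the population grows even when the evidence about the individual is unchanged, which is why the paper studies what decision thresholds (linear or polynomial in population size) a relational logistic regression can represent.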

IJCAI Conference 2013 Conference Paper

Cyclic Causal Models with Discrete Variables: Markov Chain Equilibrium Semantics and Sample Ordering

  • David Poole
  • Mark Crowley

We analyze the foundations of cyclic causal models for discrete variables, and compare structural equation models (SEMs) to an alternative semantics as the equilibrium (stationary) distribution of a Markov chain. We show that, under general conditions, discrete cyclic SEMs cannot have independent noise; even in the simplest case, cyclic structural equation models imply constraints on the noise. We give a formalization of an alternative Markov chain equilibrium semantics which requires not only the causal graph, but also a sample order. We show how the resulting equilibrium is a function of the sample ordering, both theoretically and empirically.

IJCAI Conference 2013 Conference Paper

Probabilistic Reasoning with Undefined Properties in Ontologically-Based Belief Networks

  • Chia-Li Kuo
  • David Buchman
  • Arzoo Katiyar
  • David Poole

This paper concerns building probabilistic models with an underlying ontology that defines the classes and properties used in the model. In particular, it considers the problem of reasoning with properties that may not always be defined. Furthermore, we may even be uncertain about whether a property is defined for a given individual. One approach is to explicitly add a value “undefined” to the range of random variables, forming extended belief networks; however, adding an extra value to a random variable’s range has a large computational overhead. In this paper, we propose an alternative, ontologically-based belief networks, where all properties are only used when they are defined, and we show how probabilistic reasoning can be carried out without explicitly using the value “undefined” during inference. We prove this is equivalent to reasoning with the corresponding extended belief network and empirically demonstrate that inference becomes more efficient.

AAAI Conference 2012 Conference Paper

A Search Algorithm for Latent Variable Models with Unbounded Domains

  • Michael Chiang
  • David Poole

This paper concerns learning and prediction with probabilistic models where the domain sizes of latent variables have no a priori upper bound. Current approaches represent prior distributions over latent variables by stochastic processes such as the Dirichlet process, and rely on Monte Carlo sampling to estimate the model from data. We propose an alternative approach that searches over the domain sizes of latent variables, and allows arbitrary priors over their domain sizes. We prove error bounds for expected probabilities, where the error bounds diminish with increasing search scope. The search algorithm can be truncated at any time. We empirically demonstrate the approach for topic modelling of text documents.

KR Conference 2010 Conference Paper

Towards a Logic of Feature-based Semantic Science Theories

  • David Poole

The aim of semantic science is to allow for the publication of ontologies, observation data, and hypotheses/theories. Hypotheses make predictions on data and on new cases. Those hypotheses that fit the available evidence are called theories. This paper considers how theories can be used for predictions in new cases. Theories are typically very narrow, and not all of the inputs to a theory are observed, so to make predictions on a particular case, many theories need to be used. Without any global design, the available theories do not necessarily fit together nicely. This paper explains how theories can be combined into theory ensembles to make predictions on a particular case. This is needed to evaluate theories, and to make useful predictions. We motivate and give desiderata for theory ensembles for level 1, feature-based, semantic science, which assumes that the data and the theories can be described in terms of features (random variables).

  • Information is published using well-defined ontologies (Smith 2003) to allow semantic interoperability.
  • Data is published (Fox et al. 2006; McGuinness et al. 2007) about observations of the world described using the vocabulary specified by the ontologies.
  • Scientists (and others) publish hypotheses that make predictions on data. These hypotheses make reference to ontologies. Hypotheses that fit the data are called theories.
  • New data can be used to evaluate the hypotheses that make predictions on that data.
  • Descriptions of competing theories can be used to devise experiments that will distinguish the theories.

IJCAI Conference 2009 Conference Paper

Lifted Aggregation in Directed First-order Probabilistic Models

  • Jacek Kisynski
  • David Poole

As exact inference for first-order probabilistic graphical models at the propositional level can be formidably expensive, there is an ongoing effort to design efficient lifted inference algorithms for such models. This paper discusses directed first-order models that require an aggregation operator when a parent random variable is parameterized by logical variables that are not present in a child random variable. We introduce a new data structure, aggregation parfactors, to describe aggregation in directed first-order models. We show how to extend Milch et al.'s C-FOVE algorithm to perform lifted inference in the presence of aggregation parfactors. We also show that there are cases where the polynomial time complexity (in the domain size of logical variables) of the C-FOVE algorithm can be reduced to logarithmic time complexity using aggregation parfactors.

AAAI Conference 2007 Conference Paper

Logical Generative Models for Probabilistic Reasoning about Existence, Roles and Identity

  • David Poole

In probabilistic reasoning, the problems of existence and identity are important to many different queries; for example, the probability that something that fits some description exists, the probability that some description refers to an object you know about or to a new object, or the probability that an object fulfils some role. Many interesting queries reduce to reasoning about the role of objects. Being able to talk about the existence of parts and sub-parts and the relationships between these parts, allows for probability distributions over complex descriptions. Rather than trying to define a new language, this paper shows how the integration of multiple objects, ontologies and roles can be achieved cleanly. This solves two main problems: reasoning about existence and identity while preserving the clarity principle that specifies that probabilities must be over well defined propositions, and the correspondence problem that means that we don’t need to search over all possible correspondences between objects said to exist and things in the world.

KR Conference 2004 Conference Paper

Qualitative Probabilistic Matching with Hierarchical Descriptions

  • David Poole
  • Clinton Smyth

This paper is about decision making based on real-world descriptions of a domain. There are many domains where different people have described various parts of the world at different levels of abstraction (using more general or less general terms) and at different levels of detail (where objects may or may not be described in terms of their parts), and where models are also described at different levels of abstraction and detail. However, to make decisions we need to be able to reason about which models match particular instances. This paper describes the issues involved in such matching. This is the basis of a number of fielded systems that do qualitative-probabilistic matching of models and instances for real-world data.

IJCAI Conference 2003 Conference Paper

First-order probabilistic inference

  • David Poole

There have been many proposals for first-order belief networks (i.e., where we quantify over individuals) but these typically only let us reason about the individuals that we know about. There are many instances where we have to quantify over all of the individuals in a population. When we do this the population size often matters and we need to reason about all of the members of the population (but not necessarily individually). This paper presents an algorithm to reason about multiple individuals, where we may know particular facts about some of them, but want to treat the others as a group. Combining unification with variable elimination lets us reason about classes of individuals without needing to ground out the theory.

NeurIPS Conference 2002 Conference Paper

Real-Time Monitoring of Complex Industrial Processes with Particle Filters

  • Rubén Morales-Menéndez
  • Nando de Freitas
  • David Poole

This paper discusses the application of particle filtering algorithms to fault diagnosis in complex industrial processes. We consider two ubiquitous processes: an industrial dryer and a level tank. For these applications, we compared three particle filtering variants: standard particle filtering, Rao-Blackwellised particle filtering and a version of Rao-Blackwellised particle filtering that does one-step look-ahead to select good sampling regions. We show that the overhead of the extra processing per particle of the more sophisticated methods is more than compensated by the decrease in error and variance.
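
The "standard particle filtering" baseline the paper compares against can be sketched in a few lines. The one-dimensional random-walk model and Gaussian observation noise below are illustrative stand-ins for the industrial process models in the paper.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              proc_noise=0.5, obs_noise=1.0, seed=0):
    """One-dimensional bootstrap particle filter for a random-walk state."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate: sample each particle from the transition model.
        particles = particles + rng.normal(0.0, proc_noise, n_particles)
        # Weight: Gaussian observation likelihood, then normalize.
        w = np.exp(-0.5 * ((y - particles) / obs_noise) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Resample: multinomial resampling proportional to weight.
        particles = rng.choice(particles, size=n_particles, p=w)
    return estimates

est = bootstrap_particle_filter([0.1, 0.3, 0.2, 0.5])
```

The Rao-Blackwellised variants in the paper improve on this by integrating out part of the state analytically, so fewer particles are needed for the same error.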

IJCAI Conference 1999 Conference Paper

On the Role of Context-Specific Independence in Probabilistic Inference

  • Nevin L. Zhang
  • David Poole

Context-specific independence (CSI) refers to conditional independencies that are true only in specific contexts. It has been found useful in various inference algorithms for Bayesian networks. This paper studies the role of CSI in general. We provide a characterization of the computational leverages offered by CSI without referring to particular inference algorithms. We identify the issues that need to be addressed in order to exploit the leverages and show how those issues can be addressed. We also provide empirical evidence that demonstrates the usefulness of CSI.
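A minimal example of what CSI buys, with made-up numbers: if X is independent of B in the context A=true, a tree-structured conditional distribution needs fewer parameters than the full table over all parent configurations.

```python
def p_x_given(a, b):
    """Context-specific CPD for P(X=true | A, B): in the context
    A=true, X is independent of B, so a single parameter covers
    two rows of the full conditional probability table."""
    if a:
        return 0.9          # context A=true: B is irrelevant
    return 0.7 if b else 0.2

# The full table over (A, B) has 4 rows; the tree needs only 3
# distinct parameters.
full_table = {(a, b): p_x_given(a, b)
              for a in (True, False) for b in (True, False)}
```

Inference algorithms can exploit the same structure: in contexts where a parent is irrelevant, it need not be summed over, which is the kind of computational leverage the paper characterises.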

IJCAI Conference 1997 Conference Paper

Probabilistic Partial Evaluation: Exploiting rule structure in probabilistic inference

  • David Poole

Bayesian belief networks have grown to prominence because they provide compact representations of many domains, and there are algorithms to exploit this compactness. The next step is to allow compact representations of the conditional probability tables of a variable given its parents. In this paper we present such a representation in terms of parent contexts and provide an algorithm that exploits this compactness. The representation is in terms of rules that provide conditional probabilities in different contexts. The algorithm is based on eliminating the variables not needed in an answer in turn. The operations for eliminating a variable correspond to a form of partial evaluation, where we are careful to maintain the probabilistic dependencies necessary for correct probabilistic inference. We show how this new method can exploit more structure than previous methods for structured belief network inference.

IJCAI Conference 1995 Conference Paper

Logic Programming for Robot Control

  • David Poole

This paper proposes logic programs as a specification for robot control. These provide a formal specification of what an agent should do depending on what it senses, and its previous sensory inputs and actions. We show how to axiomatise reactive agents, events as an interface between continuous and discrete time, and persistence, as well as axiomatising integration and differentiation over time (in terms of the limit of sums and differences). This specification need not be evaluated as a Prolog program; we can use the fact that it will be evaluated in time to get a more efficient agent. We give a detailed example of a nonholonomic maze travelling robot, where we use the same language to model both the agent and the environment. One of the main motivations for this work is that there is a clean interface between the logic programs here and the model of uncertainty embedded in probabilistic Horn abduction. This is one step towards building a decision-theoretic planning system where the output of the planner is a plan suitable for actually controlling a robot.

IJCAI Conference 1991 Conference Paper

Representing Diagnostic Knowledge for Probabilistic Horn Abduction

  • David Poole

This paper presents a simple logical framework for abduction, with probabilities associated with hypotheses. The language is an extension to pure Prolog, and it has straightforward implementations using branch and bound search with either logic-programming technology or ATMS technology. The main focus of this paper is arguing for a form of representational adequacy of this very simple system for diagnostic reasoning. It is shown how it can represent model-based knowledge, with and without faults, and with and without non-intermittency assumptions. It is also shown how this representation can represent any probabilistic knowledge representable in a Bayesian belief network.

IJCAI Conference 1989 Conference Paper

Normality and Faults in Logic-Based Diagnosis

  • David Poole

Is there one logical definition of diagnosis? In this paper I argue that the answer to this question is "no". This paper is about the pragmatics of using logic for diagnosis; we show how two popular proposals for using logic for diagnosis (namely abductive and consistency-based approaches) can be used to solve diagnostic tasks. The cases with only knowledge about how normal components work (any deviation being an error) and where there are fault models (we try to find a covering of the observations) are considered, as well as the continuum between them. The result is that there are two fundamentally different, but equally powerful diagnostic paradigms. They require different knowledge about the world, and different ways to think about a domain. This result indicates that there may not be an axiomatisation of a domain that is independent of how the knowledge is to be used.

IJCAI Conference 1987 Conference Paper

Variables in Hypotheses

  • David Poole

In many applications we want to build systems which must test the consistency of some theory (or set of axioms). This problem is general to many applications, for example abduction, learning, default reasoning, and diagnosis, and is examined here in the context of theory formation from a fixed set of possible hypotheses [PGA87, Poole86]. A problem arises when we are generating theories that contain variables. Two solutions are examined: the first where we are only allowed to have ground instances in theories formed, and the second where we may have universally quantified variables in the theory. It is shown for the second case that the solution of reverse Skolemisation is not adequate to solve the problem, nor is any naive pattern matcher. A solution for both cases is outlined.