Arrow Research search

Author name cluster

Paul Dagum

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

10 papers
2 author rows

Possible papers

10

AIJ Journal 1997 Journal Article

An optimal approximation algorithm for Bayesian inference

  • Paul Dagum
  • Michael Luby

Approximating the inference probability Pr[X = x | E = e] in any sense, even for a single evidence node E, is NP-hard. This result holds for belief networks that are allowed to contain extreme conditional probabilities—that is, conditional probabilities arbitrarily close to 0. Nevertheless, all previous approximation algorithms have failed to approximate efficiently many inferences, even for belief networks without extreme conditional probabilities. We prove that we can approximate efficiently probabilistic inference in belief networks without extreme conditional probabilities. We construct a randomized approximation algorithm—the bounded-variance algorithm—that is a variant of the known likelihood-weighting algorithm. The bounded-variance algorithm is the first algorithm with provably fast inference approximation on all belief networks without extreme conditional probabilities. From the bounded-variance algorithm, we construct a deterministic approximation algorithm using current advances in the theory of pseudorandom generators. In contrast to the exponential worst-case behavior of all previous deterministic approximations, the deterministic bounded-variance algorithm approximates inference probabilities in worst-case time that is subexponential: 2^((log n)^d), for some integer d that is a linear function of the depth of the belief network.
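
The likelihood-weighting scheme that the bounded-variance algorithm builds on can be sketched on a toy two-node network. All probabilities below are made up for illustration; this is the baseline sampler the paper refines, not the paper's algorithm itself:

```python
import random

# Toy two-node network: Rain -> WetGrass, with illustrative CPT values.
P_RAIN = 0.2
P_WET_GIVEN = {True: 0.9, False: 0.1}

def likelihood_weighting(evidence_wet, n_samples=100_000, rng=random.Random(0)):
    """Estimate P(Rain=True | WetGrass=evidence_wet) by likelihood weighting:
    sample non-evidence nodes from their conditionals, weight each sample by
    the likelihood of the fixed evidence, and return the weighted average."""
    num = den = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN                  # sample Rain from its prior
        p_wet = P_WET_GIVEN[rain]
        weight = p_wet if evidence_wet else 1 - p_wet  # likelihood of the evidence
        num += weight * rain
        den += weight
    return num / den

# Exact answer by Bayes' rule, for comparison:
# P(R|W) = 0.9*0.2 / (0.9*0.2 + 0.1*0.8) = 0.18/0.26 ≈ 0.692
print(round(likelihood_weighting(True), 2))
```

Note how the weights shrink as conditional probabilities approach 0; bounding that variance is exactly what the abstract's "extreme conditional probabilities" restriction is about.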

UAI Conference 1996 Conference Paper

Optimal Monte Carlo Estimation of Belief Network Inference

  • Malcolm Pradhan
  • Paul Dagum

We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomial-time convergence for a larger class of networks than current sampling algorithms provide. These new methods are variants of the known likelihood-weighting algorithm. We use recent advances in the theory of optimal stopping rules for Monte Carlo simulation to obtain an inference approximation with relative error ε and a small failure probability δ. We present an empirical evaluation of the algorithms that demonstrates their improved performance.
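
The (ε, δ) guarantee referenced above can be made concrete with the classical Hoeffding bound for absolute error. This is a standard textbook result, not the paper's relative-error stopping rules, whose sample counts also depend on the unknown mean:

```python
import math

def hoeffding_samples(eps, delta):
    """Samples sufficient for P(|mean_N - mu| > eps) <= delta when Z is in [0, 1],
    by Hoeffding's inequality: N >= ln(2/delta) / (2 eps^2)."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

print(hoeffding_samples(0.01, 0.05))  # absolute error 0.01 at 95% confidence → 18445
```

A relative-error guarantee, as in the abstract, needs roughly a 1/μ factor more samples, which is why adaptive stopping rules pay off when μ is unknown.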

FOCS Conference 1995 Conference Paper

An Optimal Algorithm for Monte Carlo Estimation (Extended Abstract)

  • Paul Dagum
  • Richard M. Karp
  • Michael Luby
  • Sheldon M. Ross

A typical approach to estimating an unknown quantity μ is to design an experiment that produces a random variable Z distributed in [0, 1] with E[Z] = μ, run this experiment independently a number of times, and use the average of the outcomes as the estimate. In this paper, we consider the case when no a priori information about Z is known except that it is distributed in [0, 1]. We describe an approximation algorithm AA which, given ε and δ, when running independent experiments with respect to any Z, produces an estimate that is within a factor 1+ε of μ with probability at least 1−δ. We prove that the expected number of experiments run by AA (which depends on Z) is optimal to within a constant factor for every Z.
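
The stopping-rule idea behind this kind of estimator can be sketched as follows: keep sampling until the running sum crosses a threshold, so the number of experiments automatically adapts to the unknown μ (roughly N ≈ Υ/μ). The threshold constant here is illustrative and deliberately conservative, not the paper's optimized one:

```python
import math
import random

def stopping_rule_estimate(sample_z, eps, delta, rng=random.Random(1)):
    """Sketch of a stopping-rule Monte Carlo estimator: draw samples of
    Z in [0, 1] until the running sum reaches a threshold Upsilon, then
    return Upsilon / N.  Constants are illustrative, not the paper's."""
    upsilon = 1 + 4 * (1 + eps) * math.log(2 / delta) / eps ** 2
    total, n = 0.0, 0
    while total < upsilon:
        total += sample_z(rng)
        n += 1
    return upsilon / n

# Bernoulli(0.25) experiment: E[Z] = 0.25, so the estimate should land near 0.25.
est = stopping_rule_estimate(lambda rng: float(rng.random() < 0.25), eps=0.05, delta=0.05)
print(est)
```

The key contrast with a fixed sample size is that a small μ automatically triggers more experiments, which is what makes a relative-error guarantee possible without knowing μ in advance.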

UAI Conference 1993 Conference Paper

Additive Belief-Network Models

  • Paul Dagum
  • Adam Galper

The inherent intractability of probabilistic inference has hindered the application of belief networks to large domains. Noisy OR-gates [30] and probabilistic similarity networks [18, 17] escape the complexity of inference by restricting model expressiveness. Recent work in the application of belief-network models to time-series analysis and forecasting [9, 10] has given rise to the additive belief network model (ABNM). We (1) discuss the nature and implications of the approximations made by an additive decomposition of a belief network, (2) show greater efficiency in the induction of additive models when available data are scarce, (3) generalize probabilistic inference algorithms to exploit the additive decomposition of ABNMs, (4) show greater efficiency of inference, and (5) compare results on inference with a simple additive belief network.

AIJ Journal 1993 Journal Article

Approximating probabilistic inference in Bayesian belief networks is NP-hard

  • Paul Dagum
  • Michael Luby

It is known that exact computation of conditional probabilities in belief networks is NP-hard. Many investigators in the AI community have tacitly assumed that algorithms for performing approximate inference with belief networks are of polynomial complexity. Indeed, special cases of approximate inference can be performed in time polynomial in the input size. However, we have discovered that the general problem of approximating conditional probabilities with belief networks, like exact inference, resides in the NP-hard complexity class. We develop a complexity analysis to elucidate the difficulty of approximate probabilistic inference. More specifically, we show that the existence of a polynomial-time relative approximation algorithm for major classes of problem instances implies that NP ⊆ P. We present our proof and explore the implications of the result.

UAI Conference 1993 Conference Paper

Forecasting Sleep Apnea with Dynamic Network Models

  • Paul Dagum
  • Adam Galper

Dynamic network models (DNMs) are belief networks for temporal reasoning. The DNM methodology combines techniques from time series analysis and probabilistic reasoning to provide (1) a knowledge representation that integrates noncontemporaneous and contemporaneous dependencies and (2) methods for iteratively refining these dependencies in response to the effects of exogenous influences. We use belief-network inference algorithms to perform forecasting, control, and discrete event simulation on DNMs. The belief network formulation allows us to move beyond the traditional assumptions of linearity in the relationships among time-dependent variables and of normality in their probability distributions. We demonstrate the DNM methodology on an important forecasting problem in medicine. We conclude with a discussion of how the methodology addresses several limitations found in traditional time series analyses.

TCS Journal 1992 Journal Article

Approximating the permanent of graphs with large factors

  • Paul Dagum
  • Michael Luby

Let G=(U, V, E) be a bipartite graph with |U|=|V|=n. The factor size of G, f, is the maximum number of edge-disjoint perfect matchings in G. We characterize the complexity of counting the number of perfect matchings in classes of graphs parameterized by factor size. We describe the simple algorithm, which is an approximation algorithm for the permanent that is a natural simplification of the algorithm suggested by Broder (1986) and analyzed by Jerrum and Sinclair (1988a, b). Compared to the algorithm by Jerrum and Sinclair (1988a, b), the simple algorithm achieves a polynomial speedup in the running time to compute the permanent. A combinatorial lemma is used to prove that the simple algorithm runs in time n^(O(n/f)). Thus: (1) for all constants α>0, the simple algorithm runs in polynomial time for graphs with factor size at least αn; (2) for some constant c, the simple algorithm is the fastest known approximation for graphs with factor size at least c log n. (Compare with the approximation algorithms described in Karmarkar (1988).) We prove the following complementary hardness results. For functions f such that 3⩽f(n)⩽n−3, the exact counting problem for f(n)-regular bipartite graphs is #P-complete. For any ε>0 and any function f such that 3⩽f(n)⩽n^(1−ε), approximate counting for f(n)-regular bipartite graphs is as hard as approximate counting for all bipartite graphs. An announcement of these results appears in Dagum (1988).
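
For cross-checking any such approximation on tiny instances, the quantity being approximated — the permanent of a 0/1 biadjacency matrix, which equals the number of perfect matchings of the bipartite graph — can be computed by brute force:

```python
from itertools import permutations

def permanent(A):
    """Permanent of an n x n 0/1 biadjacency matrix by brute force; for a
    bipartite graph this equals its number of perfect matchings.  Exponential
    time, so only usable for sanity-checking approximations on tiny graphs."""
    n = len(A)
    return sum(
        all(A[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

# Complete bipartite graph K_{3,3}: every permutation is a perfect matching.
K33 = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(permanent(K33))  # 3! = 6 perfect matchings
```

K_{3,3} has factor size f = 3 = n, the easy regime of the abstract's result; a sparse graph like a single cycle has f = 2 and only two perfect matchings.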

UAI Conference 1992 Conference Paper

Dynamic Network Models for Forecasting

  • Paul Dagum
  • Adam Galper
  • Eric Horvitz

We have developed a probabilistic forecasting methodology through a synthesis of belief network models and classical time-series analysis. We present the dynamic network model (DNM) and describe methods for constructing, refining, and performing inference with this representation of temporal probabilistic knowledge. The DNM representation extends static belief-network models to more general dynamic forecasting models by integrating and iteratively refining contemporaneous and time-lagged dependencies. We discuss key concepts in terms of a model for forecasting U.S. car sales in Japan.

UAI Conference 1992 Conference Paper

Reformulating Inference Problems Through Selective Conditioning

  • Paul Dagum
  • Eric Horvitz

We describe how we selectively reformulate portions of a belief network that pose difficulties for solution with a stochastic-simulation algorithm. We employ the selective conditioning approach to target specific nodes in a belief network for decomposition, based on the contribution the nodes make to the tractability of stochastic simulation. We review previous work on BNRAS algorithms, randomized approximation algorithms for probabilistic inference. We show how selective conditioning can be employed to reformulate a single BNRAS problem into multiple tractable BNRAS simulation problems. We discuss how we can use another simulation algorithm, logic sampling, to solve a component of the inference problem that provides a means for knitting the solutions of individual subproblems into a final result. Finally, we analyze tradeoffs among the computational subtasks associated with the selective conditioning approach to reformulation.
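
The conditioning identity underlying this decomposition, Pr[X | E] = Σ_c Pr[X | E, C=c] Pr[C=c | E], can be checked numerically on a tiny chain network. The CPT values below are made up for illustration:

```python
from itertools import product

# Tiny chain C -> X -> E with illustrative CPTs.
P_C = {0: 0.6, 1: 0.4}
P_X_given_C = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # P_X_given_C[c][x]
P_E_given_X = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}  # P_E_given_X[x][e]

def joint(c, x, e):
    return P_C[c] * P_X_given_C[c][x] * P_E_given_X[x][e]

def pr(x_val, e_val):
    """P(X = x_val | E = e_val) by direct enumeration of the full joint."""
    num = sum(joint(c, x_val, e_val) for c in (0, 1))
    den = sum(joint(c, x, e_val) for c, x in product((0, 1), repeat=2))
    return num / den

def pr_conditioned(x_val, e_val):
    """Same query via conditioning on C:
    P(X|E) = sum_c P(X | E, C=c) * P(C=c | E).
    Each conditioned subproblem is a smaller (here trivial) network."""
    den = sum(joint(c, x, e_val) for c, x in product((0, 1), repeat=2))
    total = 0.0
    for c in (0, 1):
        p_c_given_e = sum(joint(c, x, e_val) for x in (0, 1)) / den
        p_x_given_ec = joint(c, x_val, e_val) / sum(joint(c, x, e_val) for x in (0, 1))
        total += p_x_given_ec * p_c_given_e
    return total

print(abs(pr(1, 1) - pr_conditioned(1, 1)) < 1e-12)  # → True: the two agree
```

Conditioning pays off when instantiating C disconnects or simplifies the network enough that each subproblem becomes tractable for simulation, at the cost of one subproblem per value of C.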

FOCS Conference 1988 Conference Paper

Polytopes, Permanents and Graphs with Large Factors

  • Paul Dagum
  • Michael Luby
  • Milena Mihail
  • Umesh V. Vazirani

Randomized algorithms for approximating the number of perfect matchings in a graph are considered. An algorithm that is a natural simplification of one suggested and analyzed previously is introduced and analyzed. One of the key ideas is to view the analysis from a geometric perspective: it is proved that for any graph G the k-slice of the well-known Edmonds matching polytope has magnification 1. For a bipartite graph G=(U, V, E), |U|=|V|=n, with d edge-disjoint perfect matchings, it is proved that the ratio of the number of almost-perfect matchings to the number of perfect matchings is at most n^(3n/d). For any constant α>0 this yields a fully polynomial randomized algorithm for approximating the number of perfect matchings in bipartite graphs with d⩾αn. Moreover, for some constant c>0 it is the fastest known approximation algorithm for bipartite graphs with d⩾c log n.