Arrow Research search

Author name cluster

John W. Sheppard

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers (3)

AAAI Conference 2025 · Conference Paper

Adaptive Sampling to Reduce Epistemic Uncertainty Using Prediction Interval-Generation Neural Networks

  • Giorgio Morales
  • John W. Sheppard

Obtaining high certainty in predictive models is crucial for making informed and trustworthy decisions in many scientific and engineering domains. However, the extensive experimentation required to achieve model accuracy can be both costly and time-consuming. This paper presents an adaptive sampling approach designed to reduce epistemic uncertainty in predictive models. Our primary contribution is the development of a metric that estimates potential epistemic uncertainty by leveraging prediction interval-generation neural networks. This estimation relies on the distance between the predicted upper and lower bounds and the observed data at the tested positions and their neighboring points. Our second contribution is a batch sampling strategy based on Gaussian processes (GPs). A GP is used as a surrogate model of the networks trained at each iteration of the adaptive sampling process. Using this GP, we design an acquisition function that selects a combination of sampling locations to maximize the reduction of epistemic uncertainty across the domain. We test our approach on three unidimensional synthetic problems and a multi-dimensional dataset based on an agricultural field for selecting experimental fertilizer rates. The results demonstrate that our method consistently converges faster to minimum epistemic uncertainty levels than Normalizing Flows Ensembles, MC-Dropout, and simple GPs.
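The two contributions described in the abstract, a neighborhood-based epistemic uncertainty estimate derived from prediction interval bounds and a GP surrogate whose acquisition function selects a batch of sampling locations, can be sketched roughly as follows. This is a minimal one-dimensional illustration rather than the authors' implementation: the helper names, the k-nearest-neighbor weighting inside pi_width_uncertainty, and the UCB-style acquisition in select_batch are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def pi_width_uncertainty(x_cand, x_sampled, y_sampled, lower, upper, k=3):
    """Assumed stand-in for the paper's metric: score a location by how far
    the observed responses at its k nearest sampled neighbors sit from the
    predicted interval center, plus the interval width itself."""
    nn = np.argsort(np.abs(x_sampled - x_cand))[:k]
    center = 0.5 * (upper[nn] + lower[nn])
    width = np.maximum(upper[nn] - lower[nn], 1e-9)
    return float(np.mean(np.abs(y_sampled[nn] - center) + width))

def select_batch(x_pool, x_sampled, y_sampled, lower, upper, batch_size=4):
    """Fit a GP surrogate to the uncertainty scores at sampled locations and
    pick the pool locations with the highest predicted scores."""
    scores = np.array([pi_width_uncertainty(x, x_sampled, y_sampled, lower, upper)
                       for x in x_sampled])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(x_sampled.reshape(-1, 1), scores)
    mean, std = gp.predict(x_pool.reshape(-1, 1), return_std=True)
    return x_pool[np.argsort(-(mean + std))[:batch_size]]  # UCB-style pick
```

In the paper, the lower and upper bounds come from prediction interval-generation neural networks retrained at each iteration; in this sketch they are simply passed in as arrays aligned with x_sampled.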

UAI Conference 2015 · Conference Paper

The Long-Run Behavior of Continuous Time Bayesian Networks

  • Liessman Sturlaugson
  • John W. Sheppard

The continuous time Bayesian network (CTBN) is a temporal model consisting of interdependent continuous time Markov chains (Markov processes). One common analysis performed on Markov processes is determining their long-run behavior, such as their stationary distributions. While the CTBN can be transformed into a single Markov process over all nodes’ state combinations, its size is exponential in the number of nodes, making traditional long-run analysis intractable. To address this, we show how to perform “long-run” node marginalization that removes a node’s conditional dependence while preserving its long-run behavior. This allows long-run analysis of CTBNs to be performed in a top-down process without dealing with the entire network all at once. None of the previous CTBN inference algorithms were specifically designed to solve this problem; this paper presents the first inference algorithms for efficiently computing the stationary distributions of nodes in a CTBN.
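For a single (unfactored) continuous time Markov process, the long-run behavior referred to above is its stationary distribution π, which satisfies πQ = 0 with the entries of π summing to 1, where Q is the process's intensity matrix. The sketch below shows only that standard baseline computation, not the paper's “long-run” node marginalization, which exists precisely to avoid building the exponentially large joint process.

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi @ Q = 0 subject to pi.sum() == 1 for an intensity matrix Q."""
    n = Q.shape[0]
    # Stack the balance equations with the normalization constraint and
    # solve the over-determined system in a least-squares sense.
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state example with rate 1.0 for 0 -> 1 and rate 2.0 for 1 -> 0
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
print(stationary_distribution(Q))  # approximately [0.667, 0.333]
```

Applying this directly to a CTBN would require amalgamating all nodes into one process whose state space is exponential in the number of nodes, which is the intractability the paper's top-down marginalization sidesteps.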

UAI Conference 2014 · Conference Paper

Inference Complexity in Continuous Time Bayesian Networks

  • Liessman Sturlaugson
  • John W. Sheppard

The continuous time Bayesian network (CTBN) enables temporal reasoning by representing a system as a factored, finite-state Markov process. The CTBN uses a traditional Bayesian network (BN) to specify the initial distribution. Thus, the complexity results of Bayesian networks also apply to CTBNs through this initial distribution. However, the question remains whether propagating the probabilities through time is, by itself, also a hard problem. We show that exact and approximate inference in continuous time Bayesian networks is NP-hard even when the initial states are given.
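As a small, hypothetical illustration of why reasoning over the joint process is demanding, the sketch below amalgamates n independent binary nodes into a single joint intensity matrix (real CTBN nodes additionally condition their intensity matrices on parent states): even without dependencies, the joint state space is 2^n.

```python
import numpy as np
from itertools import product

def amalgamate(node_rates):
    """node_rates: list of 2x2 intensity matrices, one per independent binary
    node. Returns the joint intensity matrix over all state combinations."""
    n = len(node_rates)
    states = list(product(range(2), repeat=n))
    index = {s: i for i, s in enumerate(states)}
    Q = np.zeros((2 ** n, 2 ** n))
    for s in states:
        i = index[s]
        for k in range(n):                     # only one node flips at a time
            t = list(s)
            t[k] = 1 - t[k]
            Q[i, index[tuple(t)]] = node_rates[k][s[k], 1 - s[k]]
        Q[i, i] = -Q[i].sum()                  # rows of an intensity matrix sum to 0
    return Q

rates = [np.array([[-1.0, 1.0], [2.0, -2.0]])] * 3
print(amalgamate(rates).shape)  # (8, 8): grows as 2**n with the number of nodes
```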