Arrow Research search

Author name cluster

Luis E. Ortiz

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
2 author rows

Possible papers (8)

AAMAS 2018 · Conference Paper

Learning Game-theoretic Models from Aggregate Behavioral Data with Applications to Vaccination Rates in Public Health

  • Hau Chan
  • Luis E. Ortiz

In this paper, we undertake the challenging task of uncovering independencies of public-health behavioral data on populations’ vaccination rates collected by government officials in the United States. We use computational game theory to model such data as the result of distributed decision-making at the reported granularity level (e.g., nations and states). To achieve our task, we posit the view of aggregated behavioral data as jointly randomized, or mixed, strategies of multiple agents. We propose a novel general machine-learning approach to learn game-theoretic models within a given hypothesis class of games from any potentially noisy dataset of mixed strategies. We illustrate our framework using publicly available data on vaccination rates in the continental USA.

UAI 2012 · Conference Paper

Interdependent Defense Games: Modeling Interdependent Security under Deliberate Attacks

  • Hau Chan
  • Michael Ceyko
  • Luis E. Ortiz

We propose interdependent defense (IDD) games, a computational game-theoretic framework to study aspects of the interdependence of risk and security in multi-agent systems under deliberate external attacks. Our model builds upon interdependent security (IDS) games, a model due to Heal and Kunreuther that considers the source of the risk to be the result of a fixed randomized strategy. We adapt IDS games to model the attacker’s deliberate behavior. We define the attacker’s pure-strategy space and utility function and derive appropriate cost functions for the defenders. We provide a complete characterization of mixed-strategy Nash equilibria (MSNE), and design a simple polynomial-time algorithm for computing all of them, for an important subclass of IDD games. In addition, we propose a random-instance generator of (general) IDD games based on a version of the real-world Internet-derived Autonomous Systems (AS) graph (with around 27K nodes and 100K edges), and present promising empirical results using a simple learning heuristic to compute (approximate) MSNE in such games.
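The paper's MSNE algorithm is specific to IDD games and is not reproduced here. As generic background on heuristic equilibrium computation, here is best-response dynamics on a small, invented two-player coordination game; the payoffs are made up and this is not the paper's algorithm:

```python
# Best-response dynamics on a 2x2 coordination game (illustrative only;
# NOT the paper's MSNE algorithm for IDD games).

A = [[2, 0], [0, 1]]   # row player's payoffs
B = [[2, 0], [0, 1]]   # column player's payoffs

def best_response_dynamics(r, c, steps=20):
    """Iterate best responses; stop at a fixed point (a pure Nash equilibrium)."""
    for _ in range(steps):
        r_new = max(range(2), key=lambda i: A[i][c])        # row's best reply
        c_new = max(range(2), key=lambda j: B[r_new][j])    # column's best reply
        if (r_new, c_new) == (r, c):
            return r, c        # mutual best responses: a pure equilibrium
        r, c = r_new, c_new
    return r, c

print(best_response_dynamics(1, 1))   # -> (1, 1), one of the two pure NE
```

In this coordination game both (0, 0) and (1, 1) are pure equilibria, so the dynamics converge to whichever one the starting profile already sits at.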

UAI 2001 · Conference Paper

Value-Directed Sampling Methods for POMDPs

  • Pascal Poupart
  • Luis E. Ortiz
  • Craig Boutilier

We consider the problem of approximate belief-state monitoring using particle filtering for the purposes of implementing a policy for a partially-observable Markov decision process (POMDP). While particle filtering has become a widely-used tool in AI for monitoring dynamical systems, rather scant attention has been paid to its use in the context of decision making. Assuming the existence of a value function, we derive error bounds on decision quality associated with filtering using importance sampling. We also describe an adaptive procedure that can be used to dynamically determine the number of samples required to meet specific error bounds. Empirical evidence is offered supporting this technique as a profitable means of directing sampling effort where it is needed to distinguish policies.
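For readers unfamiliar with the belief-monitoring step the abstract builds on, a minimal particle-filter loop for a toy two-state model might look like this; the transition and observation numbers and the observation sequence are invented for illustration:

```python
import random

# Toy particle-filter belief monitoring for a two-state model
# (states 0 and 1). All numbers are illustrative, not from the paper.

TRANS = {0: [0.9, 0.1], 1: [0.2, 0.8]}   # P(next state | state)
OBS = {0: [0.8, 0.2], 1: [0.3, 0.7]}     # P(observation | state)

def pf_step(particles, obs, rng):
    """One propagate / weight / resample cycle."""
    moved = [0 if rng.random() < TRANS[s][0] else 1 for s in particles]
    weights = [OBS[s][obs] for s in moved]   # importance weights
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(0)
belief = [0] * 1000                      # start certain of state 0
for o in [0, 0, 1, 1]:                   # a short observation sequence
    belief = pf_step(belief, o, rng)
print(sum(belief) / len(belief))         # particle estimate of P(state = 1)
```

With these numbers the exact filter gives P(state = 1) ≈ 0.65 after the four observations, so the particle estimate should land nearby, with noise shrinking as the particle count grows.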

UAI 2000 · Conference Paper

Adaptive Importance Sampling for Estimation in Structured Domains

  • Luis E. Ortiz
  • Leslie Pack Kaelbling

Sampling is an important tool for estimating large, complex sums and integrals over high dimensional spaces. For instance, importance sampling has been used as an alternative to exact methods for inference in belief networks. Ideally, we want to have a sampling distribution that provides optimal-variance estimators. In this paper, we present methods that improve the sampling distribution by systematically adapting it as we obtain information from the samples. We present a stochastic-gradient-descent method for sequentially updating the sampling distribution based on the direct minimization of the variance. We also present other stochastic-gradient-descent methods based on the minimization of typical notions of distance between the current sampling distribution and approximations of the target, optimal distribution. We finally validate and compare the different methods empirically by applying them to the problem of action evaluation in influence diagrams.
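A minimal sketch of the direct variance-minimization idea: adapt a softmax proposal by stochastic gradient descent on the estimator's second moment E_q[w²], then estimate with the adapted proposal. The discrete domain, target p, integrand f, uniform mixing weight, and step size are all invented; this is not the paper's exact algorithm.

```python
import math
import random

# Adapt a proposal q (softmax logits, mixed with 20% uniform for
# stability) by SGD on E_q[w^2], where w = p(x) f(x) / q(x).
# Setup is invented: estimate mu = sum_x p(x) f(x) over x in {0..9}.

XS = list(range(10))
p = [0.1] * 10                      # target distribution (uniform)
f = [float(x) for x in XS]          # integrand; true value is 4.5
theta = [0.0] * 10                  # proposal logits

def softmax(ts):
    z = [math.exp(t) for t in ts]
    tot = sum(z)
    return [v / tot for v in z]

rng = random.Random(7)

# Adaptation phase: stochastic gradient descent on E_q[w^2].
for _ in range(2000):
    s = softmax(theta)
    q = [0.8 * si + 0.02 for si in s]          # 20% uniform mix
    x = rng.choices(XS, weights=q)[0]
    w = p[x] * f[x] / q[x]                     # importance weight
    for k in XS:
        # d log q(x) / d theta_k through the softmax part of the mixture;
        # grad of E_q[w^2] is -E_q[w^2 * d log q], so we ascend on theta.
        dlogq = 0.8 * s[x] * ((1.0 if k == x else 0.0) - s[k]) / q[x]
        theta[k] += 0.005 * w * w * dlogq

# Estimation phase with the frozen, adapted proposal.
s = softmax(theta)
q = [0.8 * si + 0.02 for si in s]
n, total = 5000, 0.0
for _ in range(n):
    x = rng.choices(XS, weights=q)[0]
    total += p[x] * f[x] / q[x]
mu_hat = total / n
print(round(mu_hat, 2))                        # unbiased; near 4.5
```

The update pushes proposal mass toward points with large importance weights, which for a non-negative integrand approaches the zero-variance proposal q*(x) ∝ p(x)f(x); the estimator stays unbiased throughout because each weight is computed under the proposal it was drawn from.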

ICAPS 2000 · Conference Paper

Computing Global Strategies for Multi-Market Commodity Trading

  • Milos Hauskrecht
  • Luis E. Ortiz
  • Ioannis Tsochantaridis
  • Eli Upfal

The focus of this work is the computation of efficient strategies for commodity trading in a multi-market environment. In today’s "global economy," commodities are often bought in one location and then sold (right away, or after some storage period) in different markets. Thus, a trading decision in one location must be based on expectations about future price curves in all other relevant markets, and on current and future storage and transportation costs. Investors try to compute a strategy that maximizes expected return, usually with some limitations on assumed risk. With standard stochastic assumptions on commodity price fluctuations, computing an optimal strategy can be modeled as a Markov decision process (MDP). However, in general such a formulation does not lead to efficient algorithms. In this work we propose a model for representing the multi-market trading problem and show how to obtain efficient structured algorithms for computing optimal strategies for a number of commonly used trading objective functions (Expected NPV, Mean-Variance, and Value at Risk).
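As a concrete (invented) illustration of the MDP formulation, here is a tiny finite-horizon dynamic program for a single hold-or-sell decision under a binomial price model with a per-period storage cost. The numbers are made up, and the model is far simpler than the paper's multi-market setting:

```python
from functools import lru_cache

# Toy finite-horizon trading MDP: each period the price moves up or down
# by STEP; holding costs COST per period; at the horizon the unit is
# sold at the current price. All parameters are illustrative.

P_UP, STEP, COST, HORIZON = 0.6, 1.0, 0.1, 5

@lru_cache(maxsize=None)
def value(t, price):
    """Max expected value of holding one unit at time t (no discounting)."""
    if t == HORIZON:
        return price                              # forced sale at the end
    hold = -COST + (P_UP * value(t + 1, price + STEP)
                    + (1 - P_UP) * value(t + 1, price - STEP))
    return max(price, hold)                       # sell now vs. keep holding

print(value(0, 10.0))                             # about 10.5 with these numbers
```

With this upward drift (expected move +0.2 per period against a 0.1 holding cost), holding to the horizon nets 0.1 per remaining period, so the value at t = 0 is the current price plus 0.5.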

AAAI 2000 · Conference Paper

Sampling Methods for Action Selection in Influence Diagrams

  • Luis E. Ortiz

Sampling has become an important strategy for inference in belief networks. It can also be applied to the problem of selecting actions in influence diagrams. In this paper, we present methods with probabilistic guarantees of selecting a near-optimal action. We establish bounds on the number of samples required for the traditional method of estimating the utilities of the actions, then go on to extend the traditional method based on ideas from sequential analysis, generating a method requiring fewer samples. Finally, we exploit the intuition that equally good value estimates for each action are not required, to develop a heuristic method that achieves major reductions in required sample size. The heuristic method is validated empirically.
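To give a flavor of the kind of guarantee behind the "traditional method," here is a standard Hoeffding-plus-union-bound calculation for estimating k action utilities; it assumes utilities normalized to [0, 1], which is our simplification and may differ from the bound in the paper:

```python
import math

# With n samples per action, each of k utility estimates (utilities in
# [0, 1]) is within eps of its true value with probability >= 1 - delta
# when n >= ln(2k / delta) / (2 * eps^2).  A standard Hoeffding + union
# bound, shown for illustration; not necessarily the paper's bound.

def samples_needed(k, eps, delta):
    """Sufficient per-action sample count for eps-accuracy w.p. 1 - delta."""
    return math.ceil(math.log(2 * k / delta) / (2 * eps ** 2))

print(samples_needed(k=4, eps=0.05, delta=0.05))   # -> 1016
```

The quadratic dependence on 1/eps is what makes uniform-accuracy estimation expensive, and it is exactly this cost that sequential and heuristic methods can avoid for actions that are clearly suboptimal.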

UAI 1999 · Conference Paper

Accelerating EM: An Empirical Study

  • Luis E. Ortiz
  • Leslie Pack Kaelbling

Many applications require that we learn the parameters of a model from data. EM is a method used to learn the parameters of probabilistic models for which the data for some of the variables in the models is either missing or hidden. There are instances in which this method is slow to converge. Therefore, several accelerations have been proposed to improve the method. None of the proposed acceleration methods are theoretically dominant and experimental comparisons are lacking. In this paper, we present the different proposed accelerations and try to compare them experimentally. From the results of the experiments, we argue that some acceleration of EM is always possible, but that which acceleration is superior depends on properties of the problem.
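For concreteness, here is plain (unaccelerated) EM on a stock textbook example, a mixture of two biased coins, showing the baseline iteration that the proposed accelerations speed up; the data and setup are invented, not from the paper:

```python
# Plain EM for a mixture of two biased coins. Each entry of DATA is the
# number of heads in 10 flips of a randomly chosen coin; the coin's
# identity is the hidden variable. Illustrative example only.

DATA, FLIPS = [9, 8, 2, 1, 7, 3], 10

def em_step(pa, pb):
    """One E-step + M-step; returns updated head probabilities."""
    ha = ta = hb = tb = 0.0
    for h in DATA:
        la = pa ** h * (1 - pa) ** (FLIPS - h)   # likelihood under coin A
        lb = pb ** h * (1 - pb) ** (FLIPS - h)   # likelihood under coin B
        ra = la / (la + lb)                      # E-step: responsibility of A
        ha += ra * h
        ta += ra * (FLIPS - h)
        hb += (1 - ra) * h
        tb += (1 - ra) * (FLIPS - h)
    return ha / (ha + ta), hb / (hb + tb)        # M-step: re-estimate

pa, pb = 0.6, 0.4
for _ in range(50):                              # fixed-point iteration
    pa, pb = em_step(pa, pb)
print(round(pa, 2), round(pb, 2))                # near 0.8 and 0.2
```

Each iteration is guaranteed not to decrease the likelihood, but convergence can be slow when the iteration approaches its fixed point gradually, which is precisely the regime the acceleration methods compared in the paper target.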