Arrow Research search

Author name cluster

Michael Wellman

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

13 papers
1 author row

Possible papers

13

AAAI Conference 2020 Conference Paper

Generating Realistic Stock Market Order Streams

  • Junyi Li
  • Xintong Wang
  • Yaoyang Lin
  • Arunesh Sinha
  • Michael Wellman

We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks (GANs). Our Stock-GAN model employs a conditional Wasserstein GAN to capture the history dependence of orders. The generator design includes specially crafted components that approximate the market’s auction mechanism and augment the order history with order-book constructions to improve the generation task. We perform an ablation study to verify the usefulness of these aspects of our network structure. We provide a mathematical characterization of the distribution learned by the generator. We also propose statistics to measure the quality of generated orders. We test our approach with synthetic and actual market data, compare it to a number of baseline generative models, and find the generated data to be close to real data.
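
A minimal sketch of the conditional Wasserstein critic objective the abstract mentions (not the authors' code: the linear critic, feature shapes, and history encoding are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, h, w):
    """Toy linear critic scoring an order x given history features h."""
    return np.concatenate([x, h], axis=1) @ w

def wasserstein_critic_loss(x_real, x_fake, h, w):
    # The critic maximizes E[f(real|h)] - E[f(fake|h)], an estimate of
    # the Wasserstein distance between real and generated orders; its
    # training loss is the negation of that gap.
    gap = critic(x_real, h, w).mean() - critic(x_fake, h, w).mean()
    return -gap

x_real = rng.normal(size=(64, 4))   # real order features
x_fake = rng.normal(size=(64, 4))   # generator output
h = rng.normal(size=(64, 8))        # shared order-book/history features
w = rng.normal(size=(12,))
loss = wasserstein_critic_loss(x_real, x_fake, h, w)
```

In the full model the critic and generator are neural networks conditioned on order history; here the conditioning is reduced to feature concatenation.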

AAAI Conference 2020 Conference Paper

Structure Learning for Approximate Solution of Many-Player Games

  • Zun Li
  • Michael Wellman

Games with many players are difficult to solve or even specify without adopting structural assumptions that enable representation in compact form. Such structure is generally not given and will not hold exactly for particular games of interest. We introduce an iterative structure-learning approach to search for approximate solutions of many-player games, assuming only black-box simulation access to noisy payoff samples. Our first algorithm, K-Roles, exploits symmetry by learning a role assignment for players of the game through unsupervised learning (clustering) methods. Our second algorithm, G3L, seeks sparsity by greedy search over local interactions to learn a graphical game model. Both algorithms use supervised learning (regression) to fit payoff values to the learned structures, in compact representations that facilitate equilibrium calculation. We experimentally demonstrate the efficacy of both methods in reaching quality solutions and uncovering hidden structure, on both perfectly and approximately structured game instances.
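
A minimal sketch of the role-learning step behind K-Roles (the k-means details, shapes, and planted data are assumptions, not the paper's implementation): cluster players by their observed payoff vectors so each cluster can be treated as one symmetric role.

```python
import numpy as np

rng = np.random.default_rng(1)

def assign_roles(payoffs, k, iters=50):
    """Plain k-means over per-player payoff feature vectors."""
    n = payoffs.shape[0]
    idx = np.linspace(0, n - 1, k).astype(int)   # spread-out initial centers
    centers = payoffs[idx].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(payoffs[:, None, :] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = payoffs[labels == j].mean(axis=0)
    return labels

# Two planted roles: players 0-4 earn payoffs near 0, players 5-9 near 3.
payoffs = np.vstack([rng.normal(0.0, 0.1, (5, 6)),
                     rng.normal(3.0, 0.1, (5, 6))])
roles = assign_roles(payoffs, k=2)
```

The paper then fits payoff regressions per learned role; this toy shows only the clustering step.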

AAAI Conference 2018 Conference Paper

A Regression Approach for Modeling Games With Many Symmetric Players

  • Bryce Wiedenbeck
  • Fengjun Yang
  • Michael Wellman

We exploit player symmetry to formulate the representation of large normal-form games as a regression task. This formulation allows arbitrary regression methods to be employed in estimating utility functions from a small subset of the game's outcomes. We demonstrate the applicability of both neural networks and Gaussian process regression, but focus on the latter. Once utility functions are learned, computing Nash equilibria requires estimating expected payoffs of pure-strategy deviations from mixed-strategy profiles. Computing these expectations exactly requires an infeasible sum over the full payoff matrix, so we propose and test several approximation methods. Three of these are simple and generic, applicable to any regression method and games with any number of player roles. However, the best performance is achieved by a continuous integral that approximates the summation, which we formulate for the specific case of fully-symmetric games learned by Gaussian process regression with a radial basis function kernel. We demonstrate experimentally that the combination of learned utility functions and expected payoff estimation allows us to efficiently identify approximate equilibria of large games using sparse payoff data.
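
A minimal sketch of the core regression idea: a Gaussian process with an RBF kernel fit to sparse payoff samples of a symmetric game. The 1-D profile encoding (number of opponents playing a strategy) and the stand-in linear payoffs are assumptions for illustration.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Radial basis function kernel between 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean of a GP with RBF kernel and observation noise."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# Payoff samples indexed by how many opponents play a given strategy.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                 # stand-in payoffs (linear, noiseless)
pred = gp_predict(x, y, np.array([2.0]))
```

With a utility estimator like this in hand, expected payoffs of deviations can be approximated by summing or integrating the GP mean over opponent profiles, as the abstract describes.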

AAMAS Conference 2012 Conference Paper

Learning and Predicting Dynamic Networked Behavior with Graphical Multiagent Models

  • Quang Duong
  • Michael Wellman
  • Satinder Singh
  • Michael Kearns

Factored models of multiagent systems address the complexity of joint behavior by exploiting locality in agent interactions. History-dependent graphical multiagent models (hGMMs) further capture dynamics by conditioning behavior on history. The challenges of modeling real human behavior motivated us to extend the hGMM representation by distinguishing two types of agent interactions. This distinction opens the opportunity for learning dependence networks that differ from the given graphical structures representing observed agent interactions. We propose a greedy algorithm for learning hGMMs from time-series data, inducing both graphical structure and parameters. Our empirical study employs human-subject experiment data for a dynamic consensus scenario, where agents on a network attempt to reach a unanimous vote. We show that the learned hGMMs, by directly expressing joint behavior, outperform alternatives in predicting dynamic human voting behavior and end-game vote results. Analysis of learned graphical structures reveals patterns of action dependence not directly reflected in the original experiment networks.
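
A one-step toy of the greedy structure-learning idea (the scoring rule, Laplace smoothing, and single-parent restriction are assumptions; the paper learns full hGMM structure and parameters): score candidate dependence parents by how well a parent's previous action predicts an agent's next action in the time series, and keep the best scorer.

```python
import numpy as np

def log_likelihood(child, parent):
    """Laplace-smoothed log-likelihood of the child's actions given the
    parent's previous action, estimated from the same time series."""
    ll = 0.0
    for t in range(1, len(child)):
        n = np.sum((parent[:-1] == parent[t - 1]) & (child[1:] == child[t]))
        d = np.sum(parent[:-1] == parent[t - 1])
        ll += np.log((n + 1) / (d + 2))
    return ll

def best_parent(series, agent):
    scores = {j: log_likelihood(series[agent], series[j])
              for j in range(len(series)) if j != agent}
    return max(scores, key=scores.get)

# Agent 2 copies agent 0's previous action; agent 1 is pure noise.
rng = np.random.default_rng(2)
a0 = rng.integers(0, 2, 40)
a1 = rng.integers(0, 2, 40)
a2 = np.roll(a0, 1)
parent = best_parent([a0, a1, a2], agent=2)
```

The learned dependence (agent 2 depends on agent 0) need not match any given interaction graph, which is the point the abstract makes about learned versus observed networks.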

AAMAS Conference 2012 Conference Paper

Scaling Simulation-Based Game Analysis through Deviation-Preserving Reduction

  • Bryce Wiedenbeck
  • Michael Wellman

Multiagent simulation extends the reach of game-theoretic analysis to scenarios where payoff functions can be computed from implemented agent strategies. However, this approach is limited by the exponential growth in game size relative to the number of agents. Player reductions allow us to construct games with a small number of players that approximate very large symmetric games. We introduce deviation-preserving reduction, which generalizes and improves on existing methods by combining sensitivity to unilateral deviation with granular subsampling of the profile space. We evaluate our method on several classes of random games and show that deviation-preserving reduction performs better than prior methods at approximating full-game equilibria.
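
A sketch of the profile mapping behind deviation-preserving reduction, as I understand it (the function name and divisibility assumption are mine): the payoff to a reduced-game player using strategy s is read off a full-game profile in which one player plays s and the remaining N-1 players are split evenly among the other n-1 reduced players' strategies.

```python
from collections import Counter

def dpr_full_profile(reduced_profile, focal_index, N):
    """Full-game profile whose payoff to the focal player's strategy
    stands in for that player's payoff in the reduced game."""
    n = len(reduced_profile)
    scale = (N - 1) // (n - 1)            # assumes (N - 1) % (n - 1) == 0
    full = Counter({reduced_profile[focal_index]: 1})
    for i, s in enumerate(reduced_profile):
        if i != focal_index:
            full[s] += scale
    return full

# A 4-player reduced profile standing in for a 10-player full game.
full = dpr_full_profile(["A", "A", "B", "B"], focal_index=0, N=10)
```

Keeping the focal player's deviation at full-game scale is what makes the reduction sensitive to unilateral deviation, per the abstract.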

AAMAS Conference 2010 Conference Paper

Agent-Based Analysis of Asset Pricing under Ambiguous Information

  • Ben-Alexander Cassell
  • Michael Wellman

In a representative agent model, the behavior of a social system is described in terms of a single aggregate decision maker. Such models are popular in economic and finance research, largely due to their analytic tractability, but fail to account for real-world agent heterogeneity. Agent-based models naturally incorporate heterogeneity, but are seen as hard to generalize. We propose an empirical game-theoretic approach to address this concern, and provide a case study to demonstrate this approach.

AAAI Conference 2010 Conference Paper

Algorithms for Finding Approximate Formations in Games

  • Patrick Jordan
  • Michael Wellman

Many computational problems in game theory, such as finding Nash equilibria, are algorithmically hard to solve. This limitation forces analysts to limit attention to restricted subsets of the entire strategy space. We develop algorithms to identify rationally closed subsets of the strategy space under given size constraints. First, we modify an existing family of algorithms for rational closure in two-player games to compute a related rational closure concept, called formations, for n-player games. We then extend these algorithms to apply in cases where the utility function is partially specified, or there is a bound on the size of the restricted profile space. Finally, we evaluate the performance of these algorithms on a class of random games.
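
A toy version of the rational-closure idea behind formations, under strong simplifying assumptions (symmetric 2-player game, closure under pure best responses only; the paper's formations concern n-player games and richer closure notions):

```python
import numpy as np

def closure(U, seed):
    """Close a strategy set under pure best responses to its members.
    U[i, j] is the payoff to playing i against an opponent playing j."""
    s = set(seed)
    changed = True
    while changed:
        changed = False
        for j in list(s):
            br = int(np.argmax(U[:, j]))   # best response to opponent playing j
            if br not in s:
                s.add(br)
                changed = True
    return s

# Strategies 0 and 1 best-respond to each other; 2 and 3 are never
# best responses, so they stay outside the closed set.
U = np.array([[0., 3., 0., 0.],
              [3., 0., 0., 0.],
              [1., 1., 0., 0.],
              [1., 1., 0., 0.]])
closed = closure(U, {0})
```

A rationally closed subset like `{0, 1}` can then be analyzed in place of the full strategy space, which is the restriction the abstract motivates.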

AAMAS Conference 2010 Conference Paper

History-Dependent Graphical Multiagent Models

  • Quang Duong
  • Michael Wellman
  • Satinder Singh
  • Yevgeniy Vorobeychik

A dynamic model of a multiagent system defines a probability distribution over possible system behaviors over time. Alternative representations for such models present tradeoffs in expressive power, and in accuracy and cost for inferential tasks of interest. In a history-dependent representation, behavior at a given time is specified as a probabilistic function of some portion of system history. Models may be further distinguished based on whether they specify individual or joint behavior. Joint behavior models are more expressive, but in general grow exponentially in the number of agents. Graphical multiagent models (GMMs) provide a more compact representation of joint behavior when agent interactions exhibit some local structure. We extend GMMs to condition on history, thus supporting inference about system dynamics. To evaluate this hGMM representation we study a voting consensus scenario, where agents on a network attempt to reach a preferred unanimous vote through a process of smooth fictitious play. We induce hGMMs and individual behavior models from example traces, showing that the former provide better predictions given limited history information. These hGMMs also provide advantages for answering general inference queries compared to sampling the true generative model.

AAMAS Conference 2010 Conference Paper

Strategy Exploration in Empirical Games

  • Patrick Jordan
  • L. Julian Schvartzman
  • Michael Wellman

Empirical analyses of complex games necessarily focus on a restricted set of strategies, and thus the value of empirical game models depends on effective methods for selectively exploring a space of strategies. We formulate an iterative framework for strategy exploration, and experimentally evaluate an array of generic exploration policies on three games: one infinite game with known analytic solution, and two relatively large empirical games generated by simulation. Policies based on iteratively finding a beneficial deviation or best response to the minimum-regret profile among previously explored strategies perform generally well on the profile-regret measure, although we find that some stochastic introduction of suboptimal responses can often lead to more effective exploration in early stages of the process. A novel formation-based policy performs well on all measures by producing low-regret approximate formations earlier than the deviation-based policies.
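
A minimal sketch of iterative best-response exploration (as a simplification, the target is uniform play over the explored set rather than the paper's minimum-regret profile):

```python
import numpy as np

def explore(U, start=0, steps=5):
    """Grow a strategy set by best-responding to uniform play over it.
    U[i, j] is the row strategy's payoff against column strategy j."""
    explored = [start]
    for _ in range(steps):
        mix = np.zeros(U.shape[1])
        mix[explored] = 1.0 / len(explored)   # uniform over explored set
        br = int(np.argmax(U @ mix))          # pure best response to that mix
        if br in explored:                    # set is closed: exploration stalls
            break
        explored.append(br)
    return explored

# Rock-paper-scissors: each strategy beats the previous one.
U = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
path = explore(U, start=0)
```

Starting from rock, the policy adds paper and then stalls without ever discovering scissors, illustrating why the abstract finds that stochastic introduction of suboptimal responses can aid early exploration.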

AAMAS Conference 2008 Conference Paper

Searching for Approximate Equilibria in Empirical Games

  • Patrick Jordan
  • Yevgeniy Vorobeychik
  • Michael Wellman

When exploring a game over a large strategy space, it may not be feasible or cost-effective to evaluate the payoff of every relevant strategy profile. For example, determining a profile payoff for a procedurally defined game may require Monte Carlo simulation or other costly computation. Analyzing such games poses a search problem, with the goal of identifying equilibrium profiles by evaluating payoffs of candidate solutions and potential deviations from those candidates. We propose two algorithms, applicable to distinct models of the search process. In the revealed-payoff model, each search step determines the exact payoff for a designated pure-strategy profile. In the noisy-payoff model, a step draws a stochastic sample corresponding to such a payoff. We compare our algorithms to previous proposals from the literature for these two models, and demonstrate performance advantages.

AAMAS Conference 2008 Conference Paper

Selecting Strategies Using Empirical Game Models: An Experimental Analysis of Meta-Strategies

  • Christopher Kiekintveld
  • Michael Wellman

In many complex multi-agent domains it is impractical to compute exact analytic solutions. An alternate means of analysis applies computational tools to derive and analyze empirical game models. These models are noisy approximations, which raises questions about how to account for uncertainty when analyzing the model. We develop a novel experimental framework and apply it to benchmark meta-strategies – general algorithms for selecting strategies based on empirical game models. We demonstrate that modeling noise is important; a naïve approach that disregards noise and plays according to Nash equilibrium yields poor choices. We introduce three parameterized algorithms that factor noise into the analysis by predicting distributions of opponent play. As observation noise increases, rational players generally make less specific outcome predictions. Our comparison of the algorithms identifies logit equilibrium as the best method for making these predictions. Logit equilibrium incorporates a form of noisy decision-making by players. Our evidence shows that this is a robust method for approximating the effects of uncertainty in many contexts. This result has practical relevance for guiding analysis of empirical game models. It also offers an intriguing rationale for behavioral findings that logit equilibrium is a better predictor of human behavior than Nash equilibrium.
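
A minimal sketch of the logit (quantal-response) equilibrium the paper identifies as its best predictor: players mix in proportion to exp(lambda × expected payoff), and iterating that response map reaches a fixed point. The symmetric self-play setting and the starting mixture are illustrative assumptions.

```python
import numpy as np

def logit_equilibrium(U, p0, lam=1.0, iters=500):
    """Iterate the logit response map to a fixed point for a symmetric
    2-player game with payoff matrix U."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        v = U @ p                         # expected payoff of each pure strategy
        e = np.exp(lam * (v - v.max()))   # numerically stabilized softmax
        p = e / e.sum()
    return p

# Rock-paper-scissors: the logit equilibrium is uniform play.
U = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
p = logit_equilibrium(U, [0.6, 0.3, 0.1])
```

The rationality parameter `lam` controls how sharply players respond to payoff differences; `lam → ∞` recovers best response, while `lam = 0` gives uniform play, which is how logit equilibrium models noisy decision-making.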

AAMAS Conference 2008 Conference Paper

Stochastic Search Methods for Nash Equilibrium Approximation in Simulation-Based Games

  • Yevgeniy Vorobeychik
  • Michael Wellman

We define the class of games called simulation-based games, in which the payoffs are available as an output of an oracle (simulator), rather than specified analytically or using a payoff matrix. We then describe a convergent algorithm based on a hierarchical application of simulated annealing for estimating Nash equilibria in simulation-based games with finite-dimensional strategy sets. Additionally, we present alternative algorithms for best response and Nash equilibrium estimation, with a particular focus on one-shot infinite games of incomplete information. Our experimental results demonstrate that all the approaches we introduce are efficacious, albeit some more so than others. We show, for example, that while iterative best response dynamics has relatively weak convergence guarantees, it outperforms our convergent method experimentally. Additionally, we provide considerable evidence that a method based on random search outperforms gradient descent in our setting.
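
A minimal sketch of the regret-minimization idea underlying such stochastic search methods (not the paper's hierarchical algorithm; the proposal distribution, cooling schedule, and test game are assumptions): anneal a candidate symmetric mixed strategy, accepting moves that reduce regret and, early on, occasionally worse ones.

```python
import numpy as np

rng = np.random.default_rng(3)

def regret(U, p):
    """Gain of the best pure deviation over the symmetric mixture p."""
    v = U @ p
    return v.max() - p @ v

def anneal_nash(U, steps=2000, temp=1.0, cool=0.995):
    p = rng.dirichlet(np.ones(U.shape[0]))
    for _ in range(steps):
        # Local proposal: blend the current mixture with a fresh draw.
        q = 0.8 * p + 0.2 * rng.dirichlet(np.ones(U.shape[0]) * 5)
        q /= q.sum()
        d = regret(U, q) - regret(U, p)
        if d < 0 or rng.random() < np.exp(-d / temp):
            p = q                     # accept improving (or, early, worse) moves
        temp *= cool
    return p

# Rock-paper-scissors: the unique Nash equilibrium is uniform, regret 0.
U = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
p = anneal_nash(U)
```

In the simulation-based setting the abstract describes, `regret` would be estimated from noisy oracle samples rather than computed from a known matrix, which is where the sampling and convergence questions the paper studies arise.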