Arrow Research search

Author name cluster

Ian Frank

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers

6

JAAMAS Journal 2026 Journal Article

Performance Competitions as Research Infrastructure: Large Scale Comparative Studies of Multi-Agent Teams

  • Gal A. Kaminka
  • Ian Frank
  • Kumiko Tanaka-Ishii

Abstract Performance competitions (events that pit many different programs against each other on a standardized task) provide a way for a research community to promote research progress towards challenging goals. In this paper, we argue that for maximum research benefit, any such competition must involve comparative studies under closely controlled, varying conditions. We demonstrate the critical role of comparative studies in the context of one well-known and growing performance competition: the annual Robotic Soccer World Cup (RoboCup) Championship. Specifically, over the past three years, we have carried out annual large-scale comparative evaluations—distinct from the competition itself—of the multi-agent teams taking part in the largest RoboCup league. Our study, which involved 30 different teams of agents produced by dozens of different research groups, focused on robustness. We show that (i) multi-agent teams exhibit a clear performance-robustness tradeoff; (ii) teams tend to over-specialize, so that they cannot handle beneficial changes we make to their operating environment; and (iii) teams improve in performance more than in robustness from one year to the next, despite the emphasis by RoboCup organizers on robustness as a key challenge. These results demonstrate the potential of large-scale comparative studies for producing important results otherwise difficult to discover, and are significant both in the lessons they raise for designers of multi-agent teams, and in understanding the place of performance competitions within the multi-agent research infrastructure.

AAAI Conference 2000 Conference Paper

Combining Knowledge and Search to Solve Single-Suit Bridge

  • Ian Frank
  • Alan Bundy

In problem solving, it is often important not only to find a solution but also to be able to explain it. We use the game of Bridge to illustrate how tactics, which formalise domain-specific expertise, can be used for both these tasks. Our Bridge tactics constrain search to the point where optimal strategies can quickly be identified, and also provide the key to explaining these strategies in human-understandable terms. We demonstrate this using a canonical set of single-suit Bridge problems from a definitive expert text. FINESSE ‘solves’ these problems in the technical sense that, in addition to always finding optimal solutions (and revealing a 3% error rate in the expert answers), it also explains each solution in simple, clear English text.

AAAI Conference 1998 Conference Paper

Finding Optimal Strategies for Imperfect Information Games

  • Ian Frank

We examine three heuristic algorithms for games with imperfect information: Monte-Carlo sampling, and two new algorithms we call vector minimaxing and payoff-reduction minimaxing. We compare these algorithms theoretically and experimentally, using both simple game trees and a large database of problems from the game of Bridge. Our experiments show that the new algorithms both out-perform Monte-Carlo sampling, with the superiority of payoff-reduction minimaxing being especially marked. On the Bridge problem set, for example, Monte-Carlo sampling only solves 66% of the problems, whereas payoff-reduction minimaxing solves over 95%. This level of performance was even good enough to allow us to discover five errors in the expert text used to generate the test database.
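The Monte-Carlo sampling baseline this abstract compares against can be sketched as follows: sample complete "worlds" consistent with the hidden information, solve each sampled world with ordinary perfect-information minimax, and pick the move with the best average payoff. This is a minimal illustrative sketch, not the paper's implementation; the toy game, payoffs, and all function names here are invented for illustration.

```python
import random

def minimax(node, maximizing):
    """Plain perfect-information minimax over a nested dict/leaf tree."""
    if not isinstance(node, dict):          # leaf: numeric payoff
        return node
    values = [minimax(child, not maximizing) for child in node.values()]
    return max(values) if maximizing else min(values)

def monte_carlo_choice(moves, sample_world, build_tree, n_samples=200, rng=None):
    """Pick the move with the best payoff averaged over sampled worlds."""
    rng = rng or random.Random(0)
    totals = {m: 0.0 for m in moves}
    for _ in range(n_samples):
        world = sample_world(rng)           # one guess at the hidden state
        for m in moves:
            # After our move the opponent replies, so minimize next.
            totals[m] += minimax(build_tree(world, m), maximizing=False)
    return max(moves, key=lambda m: totals[m] / n_samples)

# Toy game (invented): a hidden coin decides which payoff tree applies.
def sample_world(rng):
    return rng.choice(["heads", "tails"])

def build_tree(world, move):
    payoffs = {
        ("heads", "safe"):  {"l": 1, "r": 1},   # safe: 1 either way
        ("tails", "safe"):  {"l": 1, "r": 1},
        ("heads", "risky"): {"l": 4, "r": 0},   # risky: opponent picks 0
        ("tails", "risky"): {"l": 0, "r": 0},
    }
    return payoffs[(world, move)]

best = monte_carlo_choice(["safe", "risky"], sample_world, build_tree)
print(best)  # "safe": the risky move is minimaxed down to 0 in every world
```

The known weakness of this baseline, which motivates the paper's payoff-reduction minimaxing, is that averaging perfect-information solutions assumes the opponent plays as if the hidden information were revealed, which can systematically overvalue some moves.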