Arrow Research search

Author name cluster

Nicholas S. Flann

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers (5)

IROS Conference 2007 Conference Paper

Coordination of multiple vehicles for area coverage tasks

  • Garrett Dean Winward
  • Nicholas S. Flann

Area coverage operations such as plowing a field or mowing a lawn can be performed faster when multiple vehicles are involved. To use a team of automated vehicles safely and effectively, they must be coordinated to avoid collisions and deadlock situations. Unexpected events that affect vehicle velocities may occur during the operation, so the coordination method must be robust to such events. In this paper, a path coordination method is introduced that delays decisions about mission coordination as long as possible during mission execution, so that such unexpected situations are handled efficiently. The method's computation speed and solution quality are evaluated through simulation and compared with two other methods based on common path coordination techniques.
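The delayed-decision idea in this abstract can be illustrated with a minimal sketch (this is an assumption-laden toy, not the paper's algorithm): vehicles follow fixed grid paths, and conflicts over shared cells are resolved at execution time, first-come-first-served, instead of being scheduled in advance. A vehicle that slows down simply yields later, rather than invalidating a precomputed schedule.

```python
def simulate(paths):
    """Step vehicles along fixed paths; a vehicle waits when its next
    cell is occupied. Returns the number of time steps taken."""
    pos = [p[0] for p in paths]      # current cell of each vehicle
    idx = [0] * len(paths)           # progress along each path
    steps = 0
    while any(i < len(p) - 1 for i, p in zip(idx, paths)):
        occupied = set(pos)
        moved = False
        for v, path in enumerate(paths):
            if idx[v] < len(path) - 1:
                nxt = path[idx[v] + 1]
                if nxt not in occupied:   # decide order only when a conflict arises
                    occupied.discard(pos[v])
                    occupied.add(nxt)
                    pos[v] = nxt
                    idx[v] += 1
                    moved = True
        if not moved:
            raise RuntimeError("deadlock")
        steps += 1
    return steps

# Two hypothetical vehicles whose paths cross at cell (1, 1):
a = [(0, 1), (1, 1), (2, 1)]
b = [(1, 0), (1, 1), (1, 2)]
print(simulate([a, b]))   # → 3
```

Vehicles within a step are resolved sequentially here, which is one simple (and unfair) tie-breaking choice; the paper's method and the comparison techniques address exactly how such choices interact with robustness.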

ICRA Conference 1997 Conference Paper

Optimal route re-planning for mobile robots: a massively parallel incremental A* algorithm

  • Tao Ma
  • Amr Elssamadisy
  • Nicholas S. Flann
  • Ben Abbott

The principal advantage of incremental A* algorithms for precomputing and maintaining routes for mobile robotic vehicles is the completeness and optimality of the approach. However, the computational burden becomes unreasonable when large worlds are modeled or fine resolution is required, since the complexity is bounded by the area modeled. This problem is compounded when multiple vehicles and multiple goals are involved, since routes to each goal from each vehicle must be maintained. This paper presents a massively parallel incremental A* algorithm suitable for implementation in VLSI. The number of iterations of the parallel algorithm is bounded by the optimal path length, providing a significant speedup for large worlds. Empirical studies combined with a feasible VLSI design estimate that path calculations on a 1000 by 1000 world could be performed in approximately 110 ms worst case.
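The iteration bound claimed above can be illustrated with a sketch (a synchronous wavefront relaxation on a uniform-cost grid, not the paper's VLSI design): every cell updates from its neighbours in lock-step, and the number of sweeps needed is bounded by the optimal path length from the goal, plus one confirming sweep.

```python
INF = float("inf")

def wavefront(grid, goal):
    """grid[r][c] is True for free cells. Returns (distance map, sweeps).
    Each sweep updates all cells simultaneously from the previous sweep's
    values, mimicking a fully parallel hardware update."""
    rows, cols = len(grid), len(grid[0])
    dist = [[INF] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    sweeps = 0
    changed = True
    while changed:
        changed = False
        new = [row[:] for row in dist]
        for r in range(rows):
            for c in range(cols):
                if not grid[r][c]:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        if dist[nr][nc] + 1 < new[r][c]:
                            new[r][c] = dist[nr][nc] + 1
                            changed = True
        dist = new
        sweeps += 1
    return dist, sweeps

free = [[True] * 4 for _ in range(4)]
d, n = wavefront(free, (0, 0))
print(d[3][3], n)   # → 6 7 (optimal length 6; 6 productive sweeps + 1 check)
```

Sweep count grows with path length, not with world area, which is the property that makes a parallel implementation attractive for large worlds.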

AAAI Conference 1987 Conference Paper

Forward Chaining Logic Programming with the ATMS

  • Nicholas S. Flann

Two powerful reasoning tools have recently appeared: logic programming and assumption-based truth maintenance systems (ATMS). An ATMS offers significant advantages to a problem solver: assumptions are easily managed, and the search for solutions can be carried out in the most general context first and in any order. Logic programming allows us to program a problem solver declaratively: describe what the problem is rather than how to solve it. However, we are currently limited when using an ATMS with our problem solvers, because we are forced to describe the problem in terms of a simple language of forward implications. In this paper we present a logic programming language, called FORLOG, that raises the level of programming the ATMS to that of a powerful logic programming language. FORLOG supports the use of "logical variables" and both forward and backward reasoning. FORLOG programs are compiled into a data-flow language (similar to the RETE network) that efficiently implements de Kleer's consumer architecture. FORLOG has been implemented in Interlisp-D.
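The flavour of ATMS-style forward chaining can be sketched in a few lines (a toy, not FORLOG or a real ATMS — it tracks a single assumption set per fact rather than full environment labels): each fact carries the assumptions it depends on, and firing a rule unions the assumption sets of its antecedents, so every derived fact records the context in which it holds.

```python
def forward_chain(assumptions, rules):
    """assumptions: {fact: set of assumption labels};
    rules: list of (antecedent tuple, consequent).
    Forward-chains to a fixpoint, propagating assumption sets."""
    label = {fact: frozenset(env) for fact, env in assumptions.items()}
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in label and all(a in label for a in antecedents):
                # derived fact depends on every assumption its antecedents used
                label[consequent] = frozenset().union(
                    *(label[a] for a in antecedents))
                changed = True
    return label

# Hypothetical example: "rainy" and "cold" are assumed, not proved.
base = {"rainy": {"A1"}, "cold": {"A2"}}
rules = [(("rainy",), "wet"), (("wet", "cold"), "icy")]
out = forward_chain(base, rules)
print(sorted(out["icy"]))   # → ['A1', 'A2']
```

A real ATMS keeps *sets* of minimal environments per fact and prunes inconsistent ones, which is what lets a solver explore the most general context first.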

AAAI Conference 1986 Conference Paper

Selecting Appropriate Representations for Learning from Examples

  • Nicholas S. Flann

The task of inductive learning from examples places constraints on the representation of training instances and concepts. These constraints are different from, and often incompatible with, the constraints placed on the representation by the performance task. This incompatibility explains why previous researchers have found it so difficult to construct good representations for inductive learning: they were trying to achieve a compromise between these two sets of constraints. To address this problem, we have developed a learning system that employs two different representations: one for learning and one for performance. The learning system accepts training instances in the “performance representation,” converts them into a “learning representation” where they are inductively generalized, and then maps the learned concept back into the “performance representation.” The advantages of this approach are (a) many fewer training instances are required to learn the concept, (b) the biases of the learning program are very simple, and (c) the learning system requires virtually no “vocabulary engineering” to learn concepts in a new domain.
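The convert–generalize–convert-back pipeline can be sketched as follows (a hypothetical attribute domain, not the paper's system): raw tuples are the “performance representation,” attribute dictionaries are the “learning representation,” generalization keeps only attributes shared by all positive instances, and the result is mapped back as a predicate over raw tuples.

```python
def to_learning(raw):
    """Convert a raw tuple into the attribute-based learning representation."""
    colour, size, shape = raw
    return {"colour": colour, "size": size, "shape": shape}

def generalize(instances):
    """Keep attributes on which all positive instances agree; drop the rest
    (a simple dropping-condition generalization)."""
    dicts = [to_learning(i) for i in instances]
    concept = {}
    for attr in dicts[0]:
        values = {d[attr] for d in dicts}
        if len(values) == 1:
            concept[attr] = values.pop()
    return concept

def to_performance(concept):
    """Map the learned concept back to a test over raw tuples."""
    return lambda raw: all(to_learning(raw)[a] == v for a, v in concept.items())

matches = to_performance(generalize([("red", "small", "cube"),
                                     ("red", "large", "cube")]))
print(matches(("red", "small", "cube")), matches(("blue", "small", "cube")))
# → True False
```

Because generalization happens in the attribute space, the bias is a single simple rule (drop disagreeing attributes), illustrating why the learning-side representation can stay much simpler than the performance-side one.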