Arrow Research search

Author name cluster

Robert E. Wray

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
1 author row

Possible papers

6

AAAI Conference 2026 Conference Paper

Requirements for Aligned, Dynamic Resolution of Conflicts in Operational Constraints

  • Steven J. Jones
  • Robert E. Wray
  • John E. Laird

Deployed, autonomous AI systems must often evaluate multiple plausible courses of action (extended sequences of behavior) in novel or under-specified contexts. Despite extensive training, these systems will inevitably encounter scenarios where no available course of action fully satisfies all operational constraints (e.g., operating procedures, rules, laws, norms, and goals). To achieve goals in accordance with human expectations and values, agents must go beyond their trained policies and instead construct, evaluate, and justify candidate courses of action. These processes require contextual "knowledge" that may lie outside prior (policy) training. This paper characterizes requirements for agent decision making in these contexts. It also identifies the types of knowledge agents require to make decisions robust to agent goals and aligned with human expectations. Drawing on both analysis and empirical case studies, we examine how agents need to integrate normative, pragmatic, and situational understanding to select and then to pursue more aligned courses of action in complex, real-world environments.

AAAI Conference 2024 Conference Paper

Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis

  • James R. Kirk
  • Robert E. Wray
  • Peter Lindes
  • John E. Laird

Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach, STARS, that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The STARS approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/disconfirmation of high-quality responses that have been vetted by the agent before presentation to a user.

IS Journal 2017 Journal Article

Interactive Task Learning

  • John E. Laird
  • Kevin Gluck
  • John Anderson
  • Kenneth D. Forbus
  • Odest Chadwicke Jenkins
  • Christian Lebiere
  • Dario Salvucci
  • Matthias Scheutz

This article presents a new research area called interactive task learning (ITL), in which an agent actively tries to learn not just how to perform a task better but the actual definition of a task through natural interaction with a human instructor while attempting to perform the task. The authors provide an analysis of desiderata for ITL systems, a review of related work, and a discussion of possible application areas for ITL systems.

AAAI Conference 1998 Conference Paper

Maintaining Consistency in Hierarchical Reasoning

  • Robert E. Wray

We explore techniques for maintaining consistency in reasoning when employing dynamic hierarchical task decompositions. In particular, we consider the difficulty of maintaining consistency when an agent nonmonotonically modifies an assumption in one level of the task hierarchy and that assumption depends upon potentially dynamic assertions higher in the hierarchy. The hypothesis of our work is that reasoning maintenance can be extended to hierarchical systems such that consistency is maintained across all levels of the hierarchy. We introduce two novel extensions to standard reason maintenance approaches, assumption justification and dynamic hierarchical justification, both of which provide the necessary capabilities. The key difference between the two methods is whether a particular assumption (assumption justification) or an entire level of the hierarchy (dynamic hierarchical justification) is disabled when an inconsistency is found. Our investigations suggest that dynamic hierarchical justification has advantages over assumption justification, especially when the task decomposition is well-constructed. Agents using dynamic hierarchical justification also compare favorably to agents using less complete methods for reasoning consistency, improving the reactivity of hierarchical architectures while eliminating the need for knowledge that otherwise would be required to maintain reasoning consistency.