Arrow Research search

Author name cluster

John E. Laird

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

15 papers
2 author rows

Possible papers (15)

AAAI Conference 2026 Conference Paper

Requirements for Aligned, Dynamic Resolution of Conflicts in Operational Constraints

  • Steven J. Jones
  • Robert E. Wray
  • John E. Laird

Deployed, autonomous AI systems must often evaluate multiple plausible courses of action (extended sequences of behavior) in novel or under-specified contexts. Despite extensive training, these systems will inevitably encounter scenarios where no available course of action fully satisfies all operational constraints (e.g., operating procedures, rules, laws, norms, and goals). To achieve goals in accordance with human expectations and values, agents must go beyond their trained policies and instead construct, evaluate, and justify candidate courses of action. These processes require contextual "knowledge" that may lie outside prior (policy) training. This paper characterizes requirements for agent decision making in these contexts. It also identifies the types of knowledge agents require to make decisions robust to agent goals and aligned with human expectations. Drawing on both analysis and empirical case studies, we examine how agents need to integrate normative, pragmatic, and situational understanding to select and then to pursue more aligned courses of action in complex, real-world environments.

AAAI Conference 2024 Conference Paper

Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis

  • James R. Kirk
  • Robert E. Wray
  • Peter Lindes
  • John E. Laird

Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach, STARS, that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The STARS approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/disconfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
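
A minimal sketch of the generate, evaluate, repair, and select loop the abstract describes. The function names, the stub LLM call, and the scoring heuristic are illustrative assumptions, not STARS's actual implementation.

```python
# Hypothetical sketch of a generate-evaluate-repair-select loop over LLM
# candidate responses. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Candidate:
    text: str           # candidate task knowledge proposed by the LLM
    score: float = 0.0  # how well it matches the agent's embodiment/environment


def query_llm(prompt: str, n: int) -> List[str]:
    """Stand-in for an LLM call; returns n candidate responses."""
    return [f"candidate {i} for: {prompt}" for i in range(n)]


def select_task_knowledge(
    prompt: str,
    evaluate: Callable[[str], float],
    repair: Callable[[str], Optional[str]],
    n_candidates: int = 8,
    threshold: float = 0.5,
) -> Optional[Candidate]:
    """Widen the LLM's response space, then evaluate, repair, and select."""
    candidates = [Candidate(text) for text in query_llm(prompt, n_candidates)]
    for c in candidates:
        c.score = evaluate(c.text)
        if c.score < threshold:              # try to repair weak candidates
            fixed = repair(c.text)
            if fixed is not None:
                c.text, c.score = fixed, evaluate(fixed)
    viable = [c for c in candidates if c.score >= threshold]
    return max(viable, key=lambda c: c.score, default=None)
```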

AAAI Conference 2022 System Paper

A Demonstration of Compositional, Hierarchical Interactive Task Learning

  • Aaron Mininger
  • John E. Laird

We present a demonstration of the interactive task learning agent Rosie, where it learns the task of patrolling a simulated barracks environment through situated natural language instruction. In doing so, it builds a sizable task hierarchy composed of both innate and learned tasks, tasks formulated as achieving a goal or following a procedure, tasks with conditional branches and loops, and tasks involving communicative and mental actions. Rosie is implemented in the Soar cognitive architecture, and represents tasks using a declarative task network which it compiles into procedural rules through chunking. This is key to allowing it to learn from a single training episode and generalize quickly.
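
A small sketch of a declarative, hierarchical task network of the kind the abstract mentions. The class and field names are assumptions for illustration, not Rosie's actual representation.

```python
# Illustrative hierarchical task network: tasks are either goal-formulated or
# procedure-formulated, steps may have guards (conditional branches) and may
# expand into subtasks. Not Rosie's actual data structures.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TaskStep:
    action: str                       # primitive, communicative, or mental action
    condition: Optional[str] = None   # optional guard for conditional branches
    subtask: Optional["Task"] = None  # hierarchical composition


@dataclass
class Task:
    name: str
    goal: Optional[str] = None                            # goal-formulated tasks
    steps: List[TaskStep] = field(default_factory=list)   # procedure-formulated tasks


patrol = Task(
    name="patrol-barracks",
    steps=[
        TaskStep(action="go-to", subtask=Task(name="go-to-room", goal="agent-in(room)")),
        TaskStep(action="report-status", condition="room-is-clear"),
    ],
)
```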

YNIMG Journal 2021 Journal Article

Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains

  • Andrea Stocco
  • Catherine Sibert
  • Zoe Steine-Hanson
  • Natalie Koh
  • John E. Laird
  • Christian J. Lebiere
  • Paul Rosenbloom

The Common Model of Cognition (CMC) is a recently proposed, consensus architecture intended to capture decades of progress in cognitive science on modeling human and human-like intelligence. Because of the broad agreement around it and preliminary mappings of its components to specific brain areas, we hypothesized that the CMC could be a candidate model of the large-scale functional architecture of the human brain. To test this hypothesis, we analyzed functional MRI data from 200 participants and seven different tasks that cover a broad range of cognitive domains. The CMC components were identified with functionally homologous brain regions through canonical fMRI analysis, and their communication pathways were translated into predicted patterns of effective connectivity between regions. The resulting dynamic linear model was implemented and fitted using Dynamic Causal Modeling, and compared against six alternative brain architectures that had been previously proposed in the field of neuroscience (three hierarchical architectures and three hub-and-spoke architectures) using a Bayesian approach. The results show that, in all cases, the CMC vastly outperforms all other architectures, both within each domain and across all tasks. These findings suggest that a common set of architectural principles that could be used for artificial intelligence also underpins human brain function across multiple cognitive domains.
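
For orientation only, here is a generic sketch of Bayesian model comparison of the kind the abstract alludes to (ranking candidate architectures by model evidence). This is not the Dynamic Causal Modeling pipeline used in the study; the log-evidence values are made up, and a uniform prior over models is assumed.

```python
# Generic Bayesian model comparison: posterior model probabilities from log
# evidences under a uniform prior. Values below are placeholders.
import math
from typing import Dict


def posterior_model_probabilities(log_evidence: Dict[str, float]) -> Dict[str, float]:
    """Softmax of log model evidences, assuming a uniform prior over models."""
    m = max(log_evidence.values())
    unnorm = {k: math.exp(v - m) for k, v in log_evidence.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}


print(posterior_model_probabilities(
    {"CMC": -100.0, "hierarchical-1": -130.0, "hub-and-spoke-1": -125.0}
))
```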

IJCAI Conference 2019 Conference Paper

Learning Hierarchical Symbolic Representations to Support Interactive Task Learning and Knowledge Transfer

  • James R. Kirk
  • John E. Laird

Interactive Task Learning (ITL) focuses on learning the definition of tasks through online natural language instruction in real time. Learning the correct grounded meaning of the instructions is difficult due to ambiguous words, lack of common ground, and the presence of distractors in the environment and the agent’s knowledge. We present a learning strategy embodied in an ITL agent that interactively learns in one shot the meaning of task concepts for 40 games and puzzles in ambiguous scenarios. Our approach learns hierarchical symbolic representations of task knowledge rather than learning a mapping directly from perceptual representations. These representations enable the agent to transfer and compose knowledge, analyze and debug multiple interpretations, and communicate efficiently with the teacher to resolve ambiguity. We evaluate the efficiency of the learning by examining the number of words required to teach tasks across cases of no transfer, positive transfer, and interference from prior tasks. Our results show that the agent can correctly generalize, disambiguate, and transfer concepts within variations in language descriptions and world representations of the same task, and across variations in different tasks.
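
A toy illustration of the hierarchical, composable symbolic representations the abstract describes: learned concepts are defined over other concepts rather than raw percepts, so they can be reused when a new task mentions them. The concept names, the tiny grid encoding, and the predicates are assumptions, not the agent's actual representation.

```python
# Hypothetical composable task concepts over a simple grid world.
from typing import Callable, Dict, Tuple

Cell = Tuple[int, int]


def adjacent(a: Cell, b: Cell) -> bool:
    """Primitive spatial concept, assumed grounded in perception."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1


# Learned concepts are defined in terms of other concepts, not raw percepts,
# so a new game that mentions "line-of-three" can reuse the stored definition.
concepts: Dict[str, Callable[..., bool]] = {
    "adjacent": adjacent,
    "line-of-three": lambda a, b, c: adjacent(a, b) and adjacent(b, c),
}

print(concepts["line-of-three"]((0, 0), (0, 1), (0, 2)))  # True
```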

IS Journal 2017 Journal Article

Interactive Task Learning

  • John E. Laird
  • Kevin Gluck
  • John Anderson
  • Kenneth D. Forbus
  • Odest Chadwicke Jenkins
  • Christian Lebiere
  • Dario Salvucci
  • Matthias Scheutz

This article presents a new research area called interactive task learning (ITL), in which an agent actively tries to learn not just how to perform a task better but the actual definition of a task through natural interaction with a human instructor while attempting to perform the task. The authors provide an analysis of desiderata for ITL systems, a review of related work, and a discussion of possible application areas for ITL systems.

IJCAI Conference 2003 Conference Paper

Behavior Bounding: Toward Effective Comparisons of Agents & Humans

  • Scott A. Wallace
  • John E. Laird

In this paper, we examine methods for comparing human and agent behavior. The results of such a comparison can be used to validate a computer model of human behavior, score a Turing test, or guide an intelligent tutoring system. We introduce behavior bounding, an automated model-based approach for behavior comparison. We identify how this approach can be used with both human and agent behavior. We demonstrate that it requires minimal human effort to use, and that it is efficient when working with complex agents. Finally, we show empirical results indicating that this approach is effective at identifying behavioral problems in certain types of agents and that it has superior performance when compared against two benchmarks.
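
A hedged sketch of model-based behavior comparison in the spirit of behavior bounding: a model bounds acceptable behavior (here, as goal-to-required-behavior relations), and an observed trace is checked against those bounds. The encoding below is an assumption for illustration, not the paper's formalism.

```python
# Compare an observed behavior trace against a simple bounding model.
from typing import Dict, List, Set


def check_trace(bounds: Dict[str, Set[str]], trace: Dict[str, List[str]]) -> List[str]:
    """Return discrepancies between observed behavior and the bounding model."""
    problems = []
    for goal, required in bounds.items():
        observed = set(trace.get(goal, []))
        missing = required - observed
        if missing:
            problems.append(f"goal '{goal}' missing required behavior: {sorted(missing)}")
    return problems


bounds = {"intercept-target": {"select-weapon", "turn-toward-target"}}
trace = {"intercept-target": ["turn-toward-target"]}
print(check_trace(bounds, trace))  # reports the missing 'select-weapon' step
```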

AIJ Journal 1991 Journal Article

A preliminary analysis of the Soar architecture as a basis for general intelligence

  • Paul S. Rosenbloom
  • John E. Laird
  • Allen Newell
  • Robert McCarl

In this article we take a step towards providing an analysis of the Soar architecture as a basis for general intelligence. Included are discussions of the basic assumptions underlying the development of Soar, a description of Soar cast in terms of the theoretical idea of multiple levels of description, an example of Soar performing multi-column subtraction, and three analyses of Soar: its natural tasks, the sources of its power, and its scope and limits.

AAAI Conference 1990 Conference Paper

Integrating Execution, Planning, and Learning in Soar for External Environments

  • John E. Laird

Three key components of an autonomous intelligent system are planning, execution, and learning. This paper describes how the Soar architecture supports planning, execution, and learning in unpredictable and dynamic environments. The tight integration of these components provides reactive execution, hierarchical execution, interruption, on-demand planning, and the conversion of deliberate planning to reaction. These capabilities are demonstrated on two robotic systems controlled by Soar, one using a Puma robot arm and an overhead camera, the second using a small mobile robot with an arm.
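
A minimal sketch of the "deliberate planning converted to reaction" idea: when no cached reaction covers the current state, the agent plans on demand and caches the resulting state-to-action rule so later encounters are handled reactively. The search routine and state encoding are assumptions, not Soar's mechanism.

```python
# Hypothetical agent step that falls back to planning and caches the result.
from typing import Callable, Dict, Hashable, List, Optional

State = Hashable
Action = str


def agent_step(
    state: State,
    reactions: Dict[State, Action],
    plan: Callable[[State], Optional[List[Action]]],
) -> Optional[Action]:
    if state in reactions:              # reactive execution: no deliberation
        return reactions[state]
    steps = plan(state)                 # on-demand deliberate planning
    if not steps:
        return None
    reactions[state] = steps[0]         # cache: deliberation becomes reaction
    return steps[0]
```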

IJCAI Conference 1987 Conference Paper

Learning General Search Control from Outside Guidance

  • Andrew Golding
  • Paul S. Rosenbloom
  • John E. Laird

The system presented here shows how Soar, an architecture for general problem solving and learning, can acquire general search-control knowledge from outside guidance. The guidance can be either direct advice about what the system should do, or a problem that illustrates a relevant idea. The system makes use of the guidance by first formulating an appropriate goal for itself. In the process of achieving this goal, it learns general search-control chunks. In the case of learning from direct advice, the goal is to verify that the advice is correct. The verification allows the system to obtain general conditions of applicability of the advice, and to protect itself from erroneous advice. The system learns from illustrative problems by setting the goal of solving the problem provided. It can then transfer the lessons it learns along the way to its original problem. This transfer constitutes a rudimentary form of analogy.
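
A rough sketch of the "verify advice before using it" idea described above: advice only becomes stored search-control knowledge if the agent's own evaluation confirms it, which also protects against erroneous advice. The evaluation callback is a stand-in, not the paper's subgoal-based verification.

```python
# Hypothetical incorporation of outside advice after verification.
from typing import Callable, Dict, Tuple

State = str
Operator = str


def incorporate_advice(
    state: State,
    advised: Operator,
    evaluate: Callable[[State, Operator], bool],
    preferences: Dict[Tuple[State, Operator], str],
) -> bool:
    """Verify advice; only verified advice becomes a stored preference."""
    if evaluate(state, advised):                 # verification of the advice
        preferences[(state, advised)] = "best"   # learned search control
        return True
    return False                                 # erroneous advice is rejected
```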

AIJ Journal 1987 Journal Article

SOAR: An architecture for general intelligence

  • John E. Laird
  • Allen Newell
  • Paul S. Rosenbloom

The ultimate goal of work in cognitive architecture is to provide the foundation for a system capable of general intelligent behavior. That is, the goal is to provide the underlying structure that would enable a system to perform the full range of cognitive tasks, employ the full range of problem solving methods and representations appropriate for the tasks, and learn about all aspects of the tasks and its performance on them. In this article we present SOAR, an implemented proposal for such an architecture. We describe its organizational principles, the system as currently implemented, and demonstrations of its capabilities.

AAAI Conference 1984 Conference Paper

Towards Chunking as a General Learning Mechanism

  • John E. Laird
  • Paul S. Rosenbloom
  • Allen Newell

Chunks have long been proposed as a basic organizational unit for human memory. More recently chunks have been used to model human learning on simple perceptual-motor skills. In this paper we describe recent progress in extending chunking to be a general learning mechanism by implementing it within a general problem solver. Using the Soar problem-solving architecture, we take significant steps toward a general problem solver that can learn about all aspects of its behavior. We demonstrate chunking in Soar on three tasks: the Eight Puzzle, Tic-Tac-Toe, and a part of the R1 computer-configuration task. Not only is there improvement with practice, but chunking also produces significant transfer of learned behavior, and strategy acquisition.
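
A rough sketch of the chunking idea as described above: the result of solving a subgoal is cached as a rule keyed only by the features that were actually relevant, which is what lets the learned rule transfer to other situations and tasks that share those features. Names and the feature-selection step are assumptions, not Soar's implementation.

```python
# Hypothetical chunking-style caching of subgoal results as condition->action rules.
from typing import Callable, Dict, FrozenSet, Tuple

Feature = Tuple[str, str]
Chunks = Dict[FrozenSet[Feature], str]


def solve_with_chunking(
    situation: Dict[str, str],
    solve_subgoal: Callable[[Dict[str, str]], Tuple[str, FrozenSet[Feature]]],
    chunks: Chunks,
) -> str:
    """Use a matching chunk if one exists; otherwise solve and cache a chunk."""
    for conditions, action in chunks.items():
        if all(situation.get(k) == v for k, v in conditions):
            return action                        # learned rule fires, no search
    action, relevant = solve_subgoal(situation)  # deliberate problem solving
    chunks[frozenset(relevant)] = action         # new chunk; may transfer later
    return action
```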