Arrow Research search

Author name cluster

James Pustejovsky

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

9 papers
2 author rows

Possible papers (9)

AAAI Conference 2025 System Paper

Speech Is Not Enough: Interpreting Nonverbal Indicators of Common Knowledge and Engagement

  • Derek Palmer
  • Yifan Zhu
  • Kenneth Lai
  • Hannah VanderHoeven
  • Mariah Bradford
  • Ibrahim Khebour
  • Carlos Mabrey
  • Jack Fitzgerald

Our goal is to develop an AI Partner that can provide support for group problem solving and social dynamics. In multi-party working group environments, multimodal analytics is crucial for identifying non-verbal interactions of group members. In conjunction with their verbal participation, this creates a holistic understanding of collaboration and engagement that provides necessary context for the AI Partner. In this demo, we illustrate our present capabilities in detecting and tracking nonverbal behavior in student task-oriented interactions in the classroom, and the implications for tracking common ground and engagement.

AAAI Conference 2020 System Paper

Diana’s World: A Situated Multimodal Interactive Agent

  • Nikhil Krishnaswamy
  • Pradyumna Narayana
  • Rahul Bangar
  • Kyeongmin Rim
  • Dhruva Patil
  • David McNeely-White
  • Jaime Ruiz
  • Bruce Draper

State-of-the-art unimodal dialogue agents lack some core aspects of peer-to-peer communication: the nonverbal and visual cues that are a fundamental aspect of human interaction. To facilitate true peer-to-peer communication with a computer, we present Diana, a situated multimodal agent who exists in a mixed-reality environment with a human interlocutor, is situation- and context-aware, and responds to the human’s language, gesture, and affect to complete collaborative tasks.

AAAI Conference 2019 Conference Paper

Combining Deep Learning and Qualitative Spatial Reasoning to Learn Complex Structures from Sparse Examples with Noise

  • Nikhil Krishnaswamy
  • Scott Friedman
  • James Pustejovsky

Many modern machine learning approaches require vast amounts of training data to learn new concepts; conversely, human learning often requires few examples—sometimes only one—from which the learner can abstract structural concepts. We present a novel approach to introducing new spatial structures to an AI agent, combining deep learning over qualitative spatial relations with various heuristic search algorithms. The agent extracts spatial relations from a sparse set of noisy examples of block-based structures, and trains convolutional and sequential models of those relation sets. To create novel examples of similar structures, the agent begins placing blocks on a virtual table; after each placement, it uses a CNN to predict the most similar complete example structure, an LSTM to predict the most likely set of remaining moves needed to complete it, and heuristic search to recommend one. We verify that the agent learned the concept by observing its virtual block-building activities, wherein it ranks each potential subsequent action toward building its learned concept. We empirically assess this approach with human participants’ ratings of the block structures. Initial results and qualitative evaluations of structures generated by the trained agent show where it has generalized concepts from the training data, which heuristics perform best within the search space, and how we might improve learning and execution.
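The relation-extraction and heuristic-ranking steps in the abstract can be sketched in toy form. Everything below is an illustrative stand-in: the relation names (`left_of`, `under`), the grid coordinates, and the overlap heuristic are invented for this sketch, and the paper's CNN and LSTM models are omitted entirely.

```python
from itertools import product

def relations(blocks):
    """Extract a set of qualitative spatial relations from block positions.
    blocks: dict of name -> (x, y) grid coordinates (hypothetical scheme)."""
    rels = set()
    for a, b in product(blocks, blocks):
        if a == b:
            continue
        (ax, ay), (bx, by) = blocks[a], blocks[b]
        if ax < bx and ay == by:
            rels.add(("left_of", a, b))
        if ax == bx and ay + 1 == by:
            rels.add(("under", a, b))
    return rels

def score(candidate, target_rels):
    """Heuristic: count how many target relations a candidate layout satisfies."""
    return len(relations(candidate) & target_rels)

# Target concept: a two-block tower with a third block to its left.
target = {"a": (0, 0), "b": (1, 0), "c": (1, 1)}
target_rels = relations(target)

# Greedy placement: with 'a' and 'b' already on the table,
# rank candidate positions for 'c' and recommend the best one.
partial = {"a": (0, 0), "b": (1, 0)}
candidates = [(x, y) for x in range(3) for y in range(2)]
best = max(candidates, key=lambda p: score({**partial, "c": p}, target_rels))
print(best)  # → (1, 1): stacking 'c' on 'b' completes the tower
```

In the paper's actual pipeline, the learned CNN and LSTM supply the similarity and move predictions that this toy `score` function crudely approximates.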

IS Journal 2009 Journal Article

Digital Intuition: Applying Common Sense Using Dimensionality Reduction

  • Catherine Havasi
  • Robyn Speer
  • James Pustejovsky
  • Henry Lieberman

Understanding the world we live in requires access to a large amount of background knowledge: the commonsense knowledge that most people have and most computer systems don't. Many of the limitations of artificial intelligence today relate to the problem of acquiring and understanding common sense. The Open Mind Common Sense project began to collect common sense from volunteers on the Internet starting in 2000. The collected information is converted to a semantic network called ConceptNet. Reducing the dimensionality of ConceptNet's graph structure gives a matrix representation called AnalogySpace, which reveals large-scale patterns in the data, smooths over noise, and predicts new knowledge. Extending this work, we have created a method that uses singular value decomposition to aid in the integration of systems or representations. This technique, called blending, can be harnessed to find and exploit correlations between different resources, enabling commonsense reasoning over a broader domain.
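The dimensionality-reduction step can be illustrated with a toy matrix. The concepts, features, and values below are invented for illustration; real ConceptNet matrices are far larger and sparse, but the mechanism (truncated SVD smoothing over noise and filling in missing assertions) is the same idea the abstract describes.

```python
import numpy as np

# Toy concept-by-feature matrix in the spirit of AnalogySpace.
# Rows: concepts; columns: assertions; 1 means the assertion was collected.
concepts = ["apple", "banana", "dog", "cat"]
features = ["is_edible", "grows_on_trees", "is_an_animal", "has_fur"]
A = np.array([
    [1, 1, 0, 0],   # apple
    [1, 0, 0, 0],   # banana (no "grows_on_trees" entry collected)
    [0, 0, 1, 1],   # dog
    [0, 0, 1, 0],   # cat (no "has_fur" entry collected)
], dtype=float)

# Truncated SVD: keep only the top-k singular components.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The low-rank reconstruction generalizes across similar concepts,
# giving positive predicted scores for the two missing assertions:
print(A_hat[1, 1] > 0)  # banana / grows_on_trees -> True
print(A_hat[3, 3] > 0)  # cat / has_fur -> True
```

Blending, as described in the abstract, extends this by decomposing a combination of matrices from different resources so that correlations in one resource inform predictions in another.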

TIME Conference 2005 Invited Paper

Time and the Semantic Web

  • James Pustejovsky

In this paper we discuss the role that temporal information plays in natural language text, specifically in the context of enriching the semantics of Web texts and Web interactions. We present a language, TimeML, which attempts to capture the richness of temporal and event-related information in language, while demonstrating how it can play an important part in the development of more robust semantic ontologies. Specifically, we propose to demonstrate how a TimeML markup of text is interpreted within the DAML-Time ontology and time framework of Hobbs (2002).
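A reduced, hand-written fragment shows the kind of markup TimeML provides. The attribute set here is pared down for illustration (full TimeML also uses MAKEINSTANCE, SIGNAL, and a richer set of attributes and link types), and the parsing code is a sketch, not part of any TimeML tooling:

```python
import xml.etree.ElementTree as ET

# Simplified TimeML-style fragment: an EVENT, a TIMEX3 temporal expression,
# and a TLINK anchoring the event to the date.
timeml = """<TimeML>
  John <EVENT eid="e1" class="OCCURRENCE">arrived</EVENT> on
  <TIMEX3 tid="t1" type="DATE" value="2005-06-23">June 23, 2005</TIMEX3>.
  <TLINK lid="l1" relType="IS_INCLUDED" eventID="e1" relatedToTime="t1"/>
</TimeML>"""

root = ET.fromstring(timeml)
events = {e.get("eid"): e.text for e in root.iter("EVENT")}
times = {t.get("tid"): t.get("value") for t in root.iter("TIMEX3")}
links = [(l.get("eventID"), l.get("relType"), l.get("relatedToTime"))
         for l in root.iter("TLINK")]
print(events, times, links)
```

It is annotations like these (events, temporal expressions, and the links between them) that the paper proposes to interpret within the DAML-Time ontology.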

AAAI Conference 1987 Conference Paper

The Acquisition of Conceptual Structure for the Lexicon

  • James Pustejovsky

There has recently been a great deal of interest in the structure of the lexicon for natural language understanding and generation. One of the major problems encountered has been the optimal organization of the enormous amounts of lexical knowledge necessary for robust NLP systems. Transforming machine-readable dictionaries into semantically organized networks has therefore become a major research interest. In this paper we propose a representation language for lexical information in dictionaries, and describe an interactive learning approach to this problem, making use of extensive knowledge of the domain being learned. We compare our model to existing systems designed for automatic classification of lexical knowledge.