Arrow Research search

Author name cluster

Kenneth Forbus

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

17 papers
1 author row

Possible papers (17)

AAAI Conference 2018 Conference Paper

Action Recognition From Skeleton Data via Analogical Generalization Over Qualitative Representations

  • Kezhen Chen
  • Kenneth Forbus

Human action recognition remains a difficult problem for AI. Traditional machine learning techniques can have high recognition accuracy, but they are typically black boxes whose internal models are not inspectable and whose results are not explainable. This paper describes a new pipeline for recognizing human actions from skeleton data via analogical generalization. Specifically, starting with Kinect data, we segment each human action by temporal regions where the motion is qualitatively uniform, creating a sketch graph that provides a form of qualitative representation of the behavior that is easy to visualize. Models are learned from sketch graphs via analogical generalization, which are then used for classification via analogical retrieval. The retrieval process also produces links between the new example and components of the model that provide explanations. To improve recognition accuracy, we implement dynamic feature selection to pick reasonable relational features. We show the explanation advantage of our approach by example, and results on three public datasets illustrate its utility.

AAAI Conference 2018 Conference Paper

Learning From Unannotated QA Pairs to Analogically Disambiguate and Answer Questions

  • Maxwell Crouse
  • Clifton McFate
  • Kenneth Forbus

Creating systems that can learn to answer natural language questions has been a longstanding challenge for artificial intelligence. Most prior approaches focused on producing a specialized language system for a particular domain and dataset, and they required training on a large corpus manually annotated with logical forms. This paper introduces an analogy-based approach that instead adapts an existing general purpose semantic parser to answer questions in a novel domain by jointly learning disambiguation heuristics and query construction templates from purely textual question-answer pairs. Our technique uses possible semantic interpretations of the natural language questions and answers to constrain a query-generation procedure, producing cases during training that are subsequently reused via analogical retrieval and composed to answer test questions. Bootstrapping an existing semantic parser in this way significantly reduces the number of training examples needed to accurately answer questions. We demonstrate the efficacy of our technique using the Geoquery corpus, on which it approaches state-of-the-art performance using 10-fold cross validation, shows little decrease in performance with 2 folds, and achieves above 50% accuracy with as few as 10 examples.

AAAI Conference 2018 Conference Paper

Sketch Worksheets in STEM Classrooms: Two Deployments

  • Kenneth Forbus
  • Bridget Garnier
  • Basil Tikoff
  • Wayne Marko
  • Madeline Usher
  • Matthew McLure

Sketching can be a valuable tool for science education, but it is currently underutilized. Sketch worksheets were developed to help change this, by using AI technology to give students immediate feedback and to give instructors assistance in grading. Sketch worksheets use visual representations automatically computed by CogSketch, which are combined with conceptual information from the OpenCyc ontology. Feedback is provided to students by comparing an instructor's sketch to a student's sketch, using the Structure-Mapping Engine. This paper describes our experiences in deploying sketch worksheets in two types of classes: Geoscience and AI. Sketch worksheets for introductory geoscience classes were developed by geoscientists at the University of Wisconsin-Madison, authored using CogSketch, and used in classes at both Wisconsin and Northwestern University. Sketch worksheets were also developed and deployed for a knowledge representation and reasoning course at Northwestern. Our experience indicates that sketch worksheets can provide helpful on-the-spot feedback to students and significantly improve grading efficiency, to the point where sketching assignments can be more practical to use broadly in STEM education.

AAAI Conference 2017 Conference Paper

Analogical Chaining with Natural Language Instruction for Commonsense Reasoning

  • Joseph Blass
  • Kenneth Forbus

Understanding commonsense reasoning is one of the core challenges of AI. We are exploring an approach inspired by cognitive science, called analogical chaining, to create cognitive systems that can perform commonsense reasoning. Just as rules are chained in deductive systems, multiple analogies build upon each other's inferences in analogical chaining. The cases used in analogical chaining, called common sense units, are small, to provide inferential focus and broader transfer. Importantly, such common sense units can be learned via natural language instruction, thereby increasing the ease of extending such systems. This paper describes analogical chaining, natural language instruction via microstories, and some subtleties that arise in controlling reasoning. The utility of this technique is demonstrated by performance of an implemented system on problems from the Choice of Plausible Alternatives test of commonsense causal reasoning.

AAAI Conference 2015 Conference Paper

Extending Analogical Generalization with Near-Misses

  • Matthew McLure
  • Scott Friedman
  • Kenneth Forbus

Concept learning is a central problem for cognitive systems. Generalization techniques can help organize examples by their commonalities, but comparisons with non-examples, near-misses, can provide discrimination. Early work on near-misses required hand-selected examples by a teacher who understood the learner's internal representations. This paper introduces Analogical Learning by Integrating Generalization and Near-misses (ALIGN) and describes three key advances. First, domain-general cognitive models of analogical processes are used to handle a wider range of examples. Second, ALIGN's analogical generalization process constructs multiple probabilistic representations per concept via clustering, and hence can learn disjunctive concepts. Finally, ALIGN uses unsupervised analogical retrieval to find its own near-miss examples. We show that ALIGN outperforms analogical generalization on two perceptual data sets: (1) hand-drawn sketches; and (2) geospatial concepts from strategy-game maps.

AAAI Conference 2015 Conference Paper

Learning Plausible Inferences from Semantic Web Knowledge by Combining Analogical Generalization with Structured Logistic Regression

  • Chen Liang
  • Kenneth Forbus

Fast and efficient learning over large bodies of commonsense knowledge is a key requirement for cognitive systems. Semantic web knowledge bases provide an important new resource of ground facts from which plausible inferences can be learned. This paper applies structured logistic regression with analogical generalization (SLogAn) to make use of structural as well as statistical information to achieve rapid and robust learning. SLogAn achieves state-of-the-art performance in a standard triplet classification task on two data sets and, in addition, can provide understandable explanations for its answers.

AAAI Conference 2015 Conference Paper

Moral Decision-Making by Analogy: Generalizations versus Exemplars

  • Joseph Blass
  • Kenneth Forbus

Moral reasoning is important to model accurately as AI systems become ever more integrated into our lives. Moral reasoning is rapid and unconscious; analogical reasoning, which can be unconscious, is a promising approach to model moral reasoning. This paper explores the use of analogical generalizations to improve moral reasoning. Analogical reasoning has already been used successfully to model moral reasoning in the MoralDM model, but it exhaustively matches across all known cases, which is computationally intractable and cognitively implausible for human-scale knowledge bases. We investigate the performance of an extension of MoralDM to use the MAC/FAC model of analogical retrieval over three conditions, across a set of highly confusable moral scenarios.

AAAI Conference 2014 Conference Paper

Using Narrative Function to Extract Qualitative Information from Natural Language Texts

  • Clifton McFate
  • Kenneth Forbus
  • Thomas Hinrichs

The naturalness of qualitative reasoning suggests that qualitative representations might be an important component of the semantics of natural language. Prior work showed that frame-based representations of qualitative process theory constructs could indeed be extracted from natural language texts. That technique relied on the parser recognizing specific syntactic constructions, which had limited coverage. This paper describes a new approach, using narrative function to represent the higher-order relationships between the constituents of a sentence and between sentences in a discourse. We outline how narrative function combined with query-driven abduction enables the same kinds of information to be extracted from natural language texts. Moreover, we also show how the same technique can be used to extract type-level qualitative representations from text, and used to improve performance in playing a strategy game.

AAAI Conference 2013 Conference Paper

Automatic Extraction of Efficient Axiom Sets from Large Knowledge Bases

  • Abhishek Sharma
  • Kenneth Forbus

Efficient reasoning in large knowledge bases is an important problem for AI systems. Hand-optimization of reasoning becomes impractical as KBs grow, and impossible as knowledge is automatically added via knowledge capture or machine learning. This paper describes a method for automatic extraction of axioms for efficient inference over large knowledge bases, given a set of query types and information about the types of facts currently in the KB as well as those that might be learned. We use the highly right-skewed distribution of predicate connectivity in large knowledge bases to prune intractable regions of the search space. We show the efficacy of these techniques via experiments using queries from a learning by reading system. Results show that these methods lead to an order of magnitude improvement in time with minimal loss in coverage.

AAAI Conference 2013 Conference Paper

Graph Traversal Methods for Reasoning in Large Knowledge-Based Systems

  • Abhishek Sharma
  • Kenneth Forbus

Commonsense reasoning at scale is a core problem for cognitive systems. In this paper, we discuss two ways in which heuristic graph traversal methods can be used to generate plausible inference chains. First, we discuss how Cyc’s predicate-type hierarchy can be used to get reasonable answers to queries. Second, we explain how connection graph-based techniques can be used to identify script-like structures. Finally, we demonstrate through experiments that these methods lead to significant improvement in accuracy for both Q/A and script construction.

AAAI Conference 2012 Conference Paper

Learning Qualitative Models by Demonstration

  • Thomas Hinrichs
  • Kenneth Forbus

Creating software agents that learn interactively requires the ability to learn from a small number of trials, extracting general, flexible knowledge that can drive behavior from observation and interaction. We claim that qualitative models provide a useful intermediate level of causal representation for dynamic domains, including the formulation of strategies and tactics. We argue that qualitative models are quickly learnable, and enable model-based reasoning techniques to be used to recognize, operationalize, and construct more strategic knowledge. This paper describes an approach to incrementally learning qualitative influences by demonstration in the context of a strategy game. We show how the learned model can help a system play by enabling it to explain which actions could contribute to maximizing a quantitative goal. We also show how reasoning about the model allows it to reformulate a learning problem to address delayed effects and credit assignment, such that it can improve its performance on more strategic tasks such as city placement.

AAAI Conference 2012 Conference Paper

Modeling the Evolution of Knowledge in Learning Systems

  • Abhishek Sharma
  • Kenneth Forbus

How do reasoning systems that learn evolve over time? What are the properties of different learning strategies? Characterizing the evolution of these systems is important for understanding their limitations and gaining insights into the interplay between learning and reasoning. We describe an inverse ablation model for studying how large knowledge-based systems evolve: Create a small knowledge base by ablating a large KB, and simulate learning by incrementally re-adding facts, using different strategies to simulate types of learners. For each iteration, reasoning properties (including number of questions answered and run time) are collected, to explore how learning strategies and reasoning interact. We describe several experiments with the inverse ablation model, examining how two different learning strategies perform. Our results suggest that different concepts show different rates of growth, and that the density and distribution of facts that can be learned are important parameters for modulating the rate of learning.

AAAI Conference 2011 Conference Paper

Analogical Dialogue Acts: Supporting Learning by Reading Analogies in Instructional Texts

  • David Barbella
  • Kenneth Forbus

Analogy is heavily used in instructional texts. We introduce the concept of analogical dialogue acts (ADAs), which represent the roles utterances play in instructional analogies. We describe a catalog of such acts, based on ideas from structure-mapping theory. We focus on the operations that these acts lead to while understanding instructional texts, using the Structure-Mapping Engine (SME) and dynamic case construction in a computational model. We test this model on a small corpus of instructional analogies expressed in simplified English, which were understood via a semi-automatic natural language system using analogical dialogue acts. The model enabled a system to answer questions after understanding the analogies that it was not able to answer without them.

AAAI Conference 2010 Conference Paper

An Integrated Systems Approach to Explanation-Based Conceptual Change

  • Scott Friedman
  • Kenneth Forbus

Understanding conceptual change is an important problem in modeling human cognition and in making integrated AI systems that can learn autonomously. This paper describes a model of explanation-based conceptual change, integrating sketch understanding, analogical processing, qualitative models, truth-maintenance, and heuristic-based reasoning within the Companions cognitive architecture. Sketch understanding is used to automatically encode stimuli in the form of comic strips. Qualitative models and conceptual quantities are constructed for new phenomena via analogical reasoning and heuristics. Truth-maintenance is used to integrate conceptual and episodic knowledge into explanations, and heuristics are used to modify existing conceptual knowledge in order to produce better explanations. We simulate the learning and revision of the concept of force, testing the concepts learned via a questionnaire of sketches given to students, showing that our model follows a similar learning trajectory.

IJCAI Conference 2007 Conference Paper

Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition

  • Andrew Lovett
  • Morteza Dehghani
  • Kenneth Forbus

Most existing sketch understanding systems require a closed domain to achieve recognition. This paper describes an incremental learning technique for open-domain recognition. Our system builds generalizations for categories of objects based upon previous sketches of those objects and uses those generalizations to classify new sketches. We represent sketches qualitatively because we believe qualitative information provides a level of description that abstracts away details that distract from classification, such as exact dimensions. Bayesian reasoning is used in building representations to deal with the inherent uncertainty in perception. Qualitative representations are compared using SME, a computational model of analogy and similarity that is supported by psychological evidence, including studies of perceptual similarity. We use SEQL to produce generalizations based on the common structure found by SME in different sketches of the same object. We report on the results of testing the system on a corpus of sketches of everyday objects, drawn by ten different people.

KER Journal 2005 Journal Article

Retrieval, reuse, revision and retention in case-based reasoning

  • Ramón López de Mántaras
  • David McSherry
  • Derek Bridge
  • David Leake
  • Barry Smyth
  • Susan Craw
  • Boi Faltings
  • Mary Lou Maher
  • Michael Cox
  • Kenneth Forbus
  • Mark Keane
  • Agnar Aamodt
  • Ian Watson

Case-based reasoning (CBR) is an approach to problem solving that emphasizes the role of prior experience during future problem solving (i.e., new problems are solved by reusing and if necessary adapting the solutions to similar problems that were solved in the past). It has enjoyed considerable success in a wide variety of problem solving tasks and domains. Following a brief overview of the traditional problem-solving cycle in CBR, we examine the cognitive science foundations of CBR and its relationship to analogical reasoning. We then review a representative selection of CBR research in the past few decades on aspects of retrieval, reuse, revision and retention.