Arrow Research search

Author name cluster

Nate Derbinsky

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

10 papers

Possible papers

AAAI Conference 2018 Conference Paper

Model AI Assignments 2018

  • Todd Neller
  • Zack Butler
  • Nate Derbinsky
  • Heidi Furey
  • Fred Martin
  • Michael Guerzhoy
  • Ariel Anders
  • Joshua Eckroth

The Model AI Assignments session seeks to gather and disseminate the best assignment designs of the Artificial Intelligence (AI) Education community. Recognizing that assignments form the core of student learning experience, we here present abstracts of seven AI assignments from the 2018 session that are easily adoptable, playfully engaging, and flexible for a variety of instructor needs.

AAAI Conference 2016 Conference Paper

A Comparison of Supervised Learning Algorithms for Telerobotic Control Using Electromyography Signals

  • Tyler Frasca
  • Antonio Sestito
  • Craig Versek
  • Douglas Dow
  • Barry Husowitz
  • Nate Derbinsky

Human Computer Interaction (HCI) is central for many applications, including hazardous environment inspection and telemedicine. Whereas traditional methods of HCI for teleoperating electromechanical systems include joysticks, levers, or buttons, our research focuses on using electromyography (EMG) signals to improve intuition and response time. An important challenge is to accurately and efficiently extract and map EMG signals to known positions for real-time control. In this preliminary work, we compare the accuracy and real-time performance of several machine-learning techniques for recognizing specific arm positions. We present results from offline analysis, as well as end-to-end operation using a robotic arm.
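To make the kind of comparison the abstract describes concrete, here is a minimal, hypothetical sketch of one simple supervised baseline: a nearest-centroid classifier mapping feature vectors (e.g., per-channel EMG amplitude features) to discrete arm positions. The feature values and labels are made up for illustration; the paper evaluates several real learning algorithms, not this one specifically.

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: dict mapping position label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, features):
    """Return the label whose centroid is nearest in Euclidean distance."""
    return min(model, key=lambda label: math.dist(model[label], features))

# Made-up two-channel EMG features for two arm positions.
samples = {
    "rest":   [[0.1, 0.1], [0.2, 0.1]],
    "flexed": [[0.9, 0.3], [0.8, 0.4]],
}
model = train(samples)
print(classify(model, [0.85, 0.35]))  # a flexed-like query
```

A real pipeline would additionally window the raw EMG stream and extract features before classification; that preprocessing is omitted here.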

AAAI Conference 2015 Conference Paper

Proximal Operators for Multi-Agent Path Planning

  • Jose Bento
  • Nate Derbinsky
  • Charles Mathy
  • Jonathan Yedidia

We address the problem of planning collision-free paths for multiple agents using optimization methods known as proximal algorithms. Recently this approach was explored in Bento et al. (2013), which demonstrated its ease of parallelization and decentralization, the speed with which the algorithms generate good quality solutions, and its ability to incorporate different proximal operators, each ensuring that paths satisfy a desired property. Unfortunately, the operators derived only apply to paths in 2D and require that any intermediate waypoints we might want agents to follow be preassigned to specific agents, limiting their range of applicability. In this paper we resolve these limitations. We introduce new operators to deal with agents moving in arbitrary dimensions that are faster to compute than their 2D predecessors and we introduce landmarks, spacetime positions that are automatically assigned to the set of agents under different optimality criteria. Finally, we report the performance of the new operators in several numerical experiments.
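For readers unfamiliar with proximal operators, here is a generic one-line example, not one of the paper's collision or landmark operators: the proximal operator of f(x) = lam * |x|, which has the closed-form "soft threshold" solution. Proximal algorithms of the kind discussed above compose such operators, one per cost term or constraint.

```python
def prox_l1(v, lam):
    """Soft-thresholding: argmin_x lam*|x| + 0.5*(x - v)**2."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

print(prox_l1(3.0, 1.0))   # 2.0
print(prox_l1(-0.5, 1.0))  # 0.0
```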

AAAI Conference 2015 Conference Paper

The Boundary Forest Algorithm for Online Supervised and Unsupervised Learning

  • Charles Mathy
  • Nate Derbinsky
  • Jose Bento
  • Jonathan Rosenthal
  • Jonathan Yedidia

We describe a new instance-based learning algorithm called the Boundary Forest (BF) algorithm, which can be used for supervised and unsupervised learning. The algorithm builds a forest of trees whose nodes store previously seen examples. It can be shown data points one at a time and updates itself incrementally, hence it is naturally online. Few instance-based algorithms have this property while also being fast, as the BF is. This is crucial for applications where one needs to respond to input data in real time. The number of children of each node is not set beforehand but obtained from the training procedure, which makes the algorithm very flexible with regard to what data manifolds it can learn. We test its generalization performance and speed on a range of benchmark datasets and detail in which settings it outperforms the state of the art. Empirically we find that training time scales as O(DN log N) and testing as O(D log N), where D is the dimensionality and N the amount of data.
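The core idea can be sketched as a single tree (the actual algorithm keeps a forest of such trees and has further refinements): queries greedily descend toward the stored example nearest the input, and a training example is stored as a new child only when the tree misclassifies it. This is an illustrative simplification, not the paper's full method.

```python
import math

class BoundaryTree:
    def __init__(self, point, label):
        self.point, self.label, self.children = point, label, []

    def _descend(self, query):
        """Greedy descent: repeatedly move to the closest child, if any
        child is closer to the query than the current node itself."""
        node = self
        while True:
            candidates = [node] + node.children
            best = min(candidates, key=lambda n: math.dist(n.point, query))
            if best is node:
                return node
            node = best

    def predict(self, query):
        return self._descend(query).label

    def train(self, point, label):
        node = self._descend(point)
        if node.label != label:  # store only "boundary" examples
            node.children.append(BoundaryTree(point, label))

tree = BoundaryTree([0.0, 0.0], "a")
tree.train([1.0, 1.0], "b")    # misclassified, so it is stored
tree.train([0.1, 0.0], "a")    # correctly predicted, so it is discarded
print(tree.predict([0.9, 1.1]))  # "b"
```

Because each query only walks one root-to-node path, lookup cost grows roughly with tree depth rather than with the full number of stored examples, which is consistent with the logarithmic scaling reported above.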

NeurIPS Conference 2013 Conference Paper

A message-passing algorithm for multi-agent trajectory planning

  • José Bento
  • Nate Derbinsky
  • Javier Alonso-Mora
  • Jonathan Yedidia

We describe a novel approach for computing collision-free global trajectories for p agents with specified initial and final configurations, based on an improved version of the alternating direction method of multipliers (ADMM) algorithm. Compared with existing methods, our approach is naturally parallelizable and allows for incorporating different cost functionals with only minor adjustments. We apply our method to classical challenging instances and observe that its computational requirements scale well with p for several cost functionals. We also show that a specialization of our algorithm can be used for local motion planning by solving the problem of joint optimization in velocity space.

AAAI Conference 2012 Conference Paper

A Multi-Domain Evaluation of Scaling in a General Episodic Memory

  • Nate Derbinsky
  • Justin Li
  • John Laird

Episodic memory endows agents with numerous general cognitive capabilities, such as action modeling and virtual sensing. However, for long-lived agents, there are numerous unexplored computational challenges in supporting useful episodic memory functions while maintaining real-time reactivity. In this paper, we review the implementation of episodic memory in Soar and present an expansive evaluation of that system. We demonstrate useful applications of episodic memory across a variety of domains, including games, mobile robotics, planning, and linguistics. In these domains, we characterize properties of environments, tasks, and episodic cues that affect performance, and evaluate the ability of Soar's episodic memory to support hours to days of real-time operation.

AAMAS Conference 2012 Conference Paper

Algorithms for Scaling in a General Episodic Memory

  • Nate Derbinsky
  • Justin Li
  • John Laird

Episodic memory endows autonomous agents with useful cognitive capabilities. However, for long-lived agents, there are numerous unexplored computational challenges in supporting useful episodic-memory functions while maintaining real-time reactivity. This paper presents and summarizes the evaluation of an algorithmic variant to the task-independent episodic memory of Soar that expands the class of tasks and cues the mechanism can support while remaining reactive over long agent lifetimes.

AAAI Conference 2012 Conference Paper

Functional Interactions Between Memory and Recognition Judgments

  • Justin Li
  • Nate Derbinsky
  • John Laird

One issue facing agents that accumulate large bodies of knowledge is determining whether they have knowledge that is relevant to their current goals. Performing comprehensive searches of long-term memory in every situation can be computationally expensive and disruptive to task reasoning. In this paper, we demonstrate that the recognition judgment — a heuristic for whether memory structures have been previously perceived — can serve as a low-cost indicator of the existence of potentially relevant knowledge. We present an approach for computing both context-dependent and context-independent recognition judgments using processes and data shared with declarative memories. We then describe an initial, efficient implementation in the Soar cognitive architecture and evaluate our system in a word sense disambiguation task, showing that it reduces the number of memory searches without degrading agent performance.

AAAI Conference 2011 Conference Paper

A Functional Analysis of Historical Memory Retrieval Bias in the Word Sense Disambiguation Task

  • Nate Derbinsky
  • John Laird

Effective access to knowledge within large declarative memory stores is one challenge in the development and understanding of long-living, generally intelligent agents. We focus on a sub-component of this problem: given a large store of knowledge, how should an agent's task-independent memory mechanism respond to an ambiguous cue, one that pertains to multiple previously encoded memories. A large body of cognitive modeling work suggests that human memory retrievals are biased in part by the recency and frequency of past memory access. In this paper, we evaluate the functional benefit of a set of memory retrieval heuristics that incorporate these biases, in the context of the word sense disambiguation task, in which an agent must identify the most appropriate word meaning in response to an ambiguous linguistic cue. In addition, we develop methods to integrate these retrieval biases within a task-independent declarative memory system implemented in the Soar cognitive architecture and evaluate their effectiveness and efficiency in three commonly used semantic concordances.