Arrow Research search

Author name cluster

Eyal Amir

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

33 papers
2 author rows

Possible papers

33

IJCAI Conference 2015 Conference Paper

A Deterministic Partition Function Approximation for Exponential Random Graph Models

  • Wen Pu
  • Jaesik Choi
  • Yunseong Hwang
  • Eyal Amir

Exponential Random Graph Models (ERGMs) are common, simple statistical models for social networks and other network structures. Unfortunately, inference and learning with them are hard even for small networks because their partition functions are intractable to compute exactly. In this paper, we introduce a new quadratic-time deterministic approximation to these partition functions. Our main insight enabling this advance is that subgraph statistics are sufficient to derive a lower bound for partition functions, provided the model is not dominated by a few graphs. The proposed method differs from existing methods in how it exploits asymptotic properties of subgraph statistics. Compared to current Monte Carlo simulation based methods, the new method is scalable, stable, and precise enough for inference tasks.
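To make the intractability concrete: with a single edge-count statistic, the partition function sums over all 2^(n(n-1)/2) graphs on n nodes. A minimal brute-force sketch (illustrative only; the function name and single-statistic model are assumptions, not the paper's method):

```python
from itertools import combinations, product
import math

def ergm_partition_function(n, theta):
    """Brute-force Z(theta) = sum over all graphs G on n nodes of
    exp(theta * #edges(G)).  Enumeration is exponential in the
    n*(n-1)/2 possible edges -- exactly why approximations are
    needed for realistic networks."""
    m = len(list(combinations(range(n), 2)))  # number of possible edges
    z = 0.0
    for present in product([0, 1], repeat=m):
        z += math.exp(theta * sum(present))
    return z

# With only an edge-count statistic the edges are independent, so
# Z factorises as (1 + e^theta)^m -- a handy sanity check.
```

For richer statistics (triangles, k-stars) no such factorisation exists, which is where a deterministic lower bound from subgraph statistics becomes valuable.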

AAAI Conference 2015 Conference Paper

Learning Relational Kalman Filtering

  • Jaesik Choi
  • Eyal Amir
  • Tianfang Xu
  • Albert Valocchi

The Kalman Filter (KF) is pervasively used to control a vast array of consumer, health and defense products. By grouping sets of symmetric state variables, the Relational Kalman Filter (RKF) enables us to scale the exact KF to large-scale dynamic systems. In this paper, we provide a parameter learning algorithm for the RKF, and a regrouping algorithm that prevents the degeneration of the relational structure for efficient filtering. The proposed algorithms significantly expand the applicability of RKFs by answering two questions: (1) how to learn parameters for an RKF from partial observations; and (2) how to regroup the degenerated state variables under noisy real-world observations. To our knowledge, this is the first paper on learning parameters in relational continuous probabilistic models. We show that our new algorithms significantly improve the accuracy and the efficiency of filtering large-scale dynamic systems.

KR Conference 2014 Short Paper

Tracking Beliefs and Intentions in the Werewolf Game

  • Codruta Girlea
  • Eyal Amir
  • Roxana Girju

We present a new approach to Belief Revision in Situation Calculus. We overcome the need to represent perlocutions by assuming an 'observation model' describing what beliefs, intentions, and unknown properties the utterances expose. The agents' belief states are then filtered (Shirazi and Amir 2011) with the observed utterances, resulting in an updated Kripke structure. Our new approach allows us to describe dialogues combined with actions. From a Kripke-structure perspective, we add an observation model of utterances, and a revision model accounting for how the utterances are used to change beliefs. From a Situation Calculus perspective, we model beliefs over beliefs, belief change, and perlocutionary effects. From a Belief Revision perspective, we represent how utterances affect dialogue participants on an individual level, depending on each agent's internal state. Our model does not use any linguistic input. We assume the utterances are already parsed into logical formulas that encode both their propositional content and the speech act they perform. Utterances expressed in natural language may provide further insight into the agents' beliefs, encoded in presuppositions and propositional attitudes; we plan to embed those phenomena in our model as future work. In summary, we propose a model of belief and intention change over the course of a dialogue, in the case where the decisions taken during the dialogue affect the possibly conflicting goals of the agents involved. We use Situation Calculus to model the evolution of the world and an observation model to analyze the evolution of intentions and beliefs. In our formalization, utterances, which change only beliefs and intentions, are observations. We illustrate our formalization with the game of Werewolf.

AAAI Conference 2012 Conference Paper

Identifying Bullies with a Computer Game

  • Juan Mancilla-Caceres
  • Wen Pu
  • Eyal Amir
  • Dorothy Espelage

Current computer involvement in adolescent social networks (youth between the ages of 11 and 17) provides new opportunities to study group dynamics, interactions amongst peers, and individual preferences. Nevertheless, most of the research in this area focuses on efficiently retrieving information that is explicit in large social networks (e.g., properties of the graph structure), but not on how to use the dynamics of the virtual social network to discover latent characteristics of the real-world social network. In this paper, we present the analysis of a game designed to take advantage of the familiarity of adolescents with online social networks, and describe how the data generated by the game can be used to identify bullies in 5th-grade classrooms. We present a probabilistic model of the game and, using the in-game interactions of the players (i.e., the content of chat messages), infer their social role within their classroom (either bully or non-bully). We evaluate our model using previously collected data from psychological surveys on the same 5th-grade population and by comparing the performance of the new model with off-the-shelf classifiers.

AAAI Conference 2012 Conference Paper

Information Set Generation in Partially Observable Games

  • Mark Richards
  • Eyal Amir

We address the problem of making single-point decisions in large partially observable games, where players interleave observation, deliberation, and action. We present information set generation as a key operation needed to reason about games in this way. We show how this operation can be used to implement an existing decision-making algorithm. We develop a constraint satisfaction algorithm for performing information set generation and show that it scales better than the existing depth-first search approach on multiple non-trivial games.

UAI Conference 2012 Conference Paper

Lifted Relational Variational Inference

  • Jaesik Choi
  • Eyal Amir

Hybrid continuous-discrete models naturally represent many real-world applications in robotics, finance, and environmental engineering. Inference with large-scale models is challenging because relational structures deteriorate rapidly during inference with observations. The main contribution of this paper is an efficient relational variational inference algorithm that factors large-scale probability models into simpler variational models composed of mixtures of i.i.d. (Bernoulli) random variables. The algorithm takes probabilistic relational models of large-scale hybrid systems and converts them into close-to-optimal variational models. Then, it efficiently calculates marginal probabilities on the variational models using lifted variable elimination or lifted stochastic sampling. This inference is unique because it maintains the relational structure upon individual observations and during inference steps.

AIJ Journal 2011 Journal Article

First-order logical filtering

  • Afsaneh Shirazi
  • Eyal Amir

Logical filtering is the process of updating a belief state (set of possible world states) after a sequence of executed actions and perceived observations. In general, it is intractable in dynamic domains that include many objects and relationships. Still, potential applications for such domains (e.g., the semantic web, autonomous agents, and partial-knowledge games) encourage research beyond intractability results. In this paper we present polynomial-time algorithms for filtering belief states that are encoded as First-Order Logic (FOL) formulas. Our algorithms are exact in many cases of interest. They accept belief states in FOL without functions, permitting arbitrary arity for predicates, infinite universes of elements, and equality. They enable natural representation with explicit references to unidentified objects and partially known relationships, still maintaining tractable computation. Previous results focus on more general cases that are intractable or permit only imprecise filtering. Our algorithms guarantee that belief-state representation remains compact for STRIPS actions (among others) with unbounded-size domains. This guarantees tractable exact filtering indefinitely for those domains. The rest of our results apply to expressive modeling languages, such as partial databases and belief revision in FOL.
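As a toy illustration of the filtering semantics only (not the paper's FOL algorithms, which avoid enumerating worlds), a belief state can be kept as an explicit set of possible states and filtered through a deterministic action and an observation; the function name and state encoding here are invented for illustration:

```python
def filter_belief(belief, action, observation):
    """Explicit-state logical filtering: progress every possible
    world through a deterministic `action` (state -> state), then
    keep only successors consistent with the `observation`
    predicate (state -> bool).  First-order logical filtering
    computes the same belief-state update on compact FOL formulas
    instead of explicit world sets, which is what makes it
    tractable for large domains."""
    return {s for s in (action(b) for b in belief) if observation(s)}
```

For example, filtering the belief `{0, 1, 2}` through the action `s -> s + 1` and the observation "state is odd" yields `{1, 3}`.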

IJCAI Conference 2011 Conference Paper

Lifted Relational Kalman Filtering

  • Jaesik Choi
  • Abner Guzman-Rivera
  • Eyal Amir

Kalman Filtering is a computational tool with widespread applications in robotics, financial and weather forecasting, environmental engineering and defense. Given observation and state transition models, the Kalman Filter (KF) recursively estimates the state variables of a dynamic system. However, the KF requires a cubic time matrix inversion operation at every timestep which prevents its application in domains with large numbers of state variables. We propose Relational Gaussian Models to represent and model dynamic systems with large numbers of variables efficiently. Furthermore, we devise an exact lifted Kalman Filtering algorithm which takes only linear time in the number of random variables at every timestep. We prove that our algorithm takes linear time in the number of state variables even when individual observations apply to each variable. To our knowledge, this is the first lifted (linear time) algorithm for filtering with continuous dynamic relational models.
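For context, the cubic cost mentioned above comes from inverting the innovation covariance in the standard KF update. A minimal textbook predict/update step (standard equations, not the paper's lifted algorithm; variable names are the conventional ones):

```python
import numpy as np

def kalman_step(mu, Sigma, A, Q, H, R, y):
    """One predict/update cycle of a standard Kalman filter.
    Inverting the innovation covariance S is the O(n^3) bottleneck
    that lifted/relational variants avoid by exploiting symmetry
    among state variables."""
    # Predict
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update
    S = H @ Sigma_pred @ H.T + R             # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (y - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new
```

In a 1-D example with equal process and observation noise, the posterior mean lands between the prediction and the observation, as expected.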

UAI Conference 2010 Conference Paper

Lifted Inference for Relational Continuous Models

  • Jaesik Choi
  • Eyal Amir
  • David J. Hill 0002

Relational Continuous Models (RCMs) represent joint probability densities over attributes of objects when the attributes have continuous domains. With relational representations, they can compactly model joint probability distributions over large numbers of variables in a natural way. This paper presents a new exact lifted inference algorithm for RCMs that scales up to large models of real-world applications. The algorithm applies to Relational Pairwise Models, which are (relational) products of potentials of arity 2. Our algorithm is unique in two ways. First, it substantially improves the efficiency of lifted inference with variables of continuous domains: when a relational model has Gaussian potentials, it takes only linear time, compared to the cubic time of previous methods. Second, it is the first exact inference algorithm that handles RCMs in a lifted way. The algorithm is illustrated on an example from econometrics. Experimental results show that our algorithm outperforms both a ground-level inference algorithm and an algorithm built with previously known lifted methods.

KR Conference 2010 Conference Paper

Reasoning about Deterministic Actions with Probabilistic Prior and Application to Stochastic Filtering

  • Hannaneh Hajishirzi
  • Eyal Amir

We present a novel algorithm and a new understanding of reasoning about a sequence of deterministic actions with a probabilistic prior. When the initial state of a dynamic system is unknown, a probability distribution can still be specified over the initial states. Estimating the posterior distribution over states (filtering) after some deterministic actions have occurred is a problem relevant to AI planning, natural language processing (NLP), and robotics, among others. Current approaches to filtering deterministic actions are not tractable even if the distribution over the initial system state is represented compactly. The reason is that state variables become correlated after a few steps. The main innovation in this paper is a method for sidestepping this problem by redefining state variables dynamically at each time step such that the posterior for time t is represented in a factored form. This update is done using a progression algorithm as a subroutine, and our algorithm's tractability follows when that subroutine is tractable. Our results hold for general deterministic actions; in particular, our algorithm is tractable for one-to-one and STRIPS actions. We apply our reasoning algorithm for deterministic actions to reasoning about sequences of probabilistic actions and improve the efficiency of current probabilistic reasoning approaches. We demonstrate the efficiency of the new algorithm empirically on AI-planning data sets.

IJCAI Conference 2009 Conference Paper

  • Hannaneh Hajishirzi
  • Afsaneh Shirazi
  • Jaesik Choi
  • Eyal Amir

In many real-world situations we are charged with detecting change as soon as possible. Important examples include detecting medical conditions, detecting security breaches, and updating caches of distributed databases. In those situations, sensing can be expensive, but it is also important to detect change in a timely manner. In this paper we present tractable greedy algorithms and prove that they either solve this decision problem optimally or approximate the optimal solution in many cases. Our problem model is a POMDP that includes a cost for sensing, a cost for delayed detection, a reward for successful detection, and no-cost partial observations. Making optimal decisions is difficult in general. We show that our tractable greedy approach finds optimal policies for sensing both a single variable and multiple correlated variables. Further, we provide approximations for the optimal solution to multiple hidden or observed variables per step. Our algorithms outperform previous algorithms in experiments over simulated data and live Wikipedia pages.

ICRA Conference 2009 Conference Paper

Combining planning and motion planning

  • Jaesik Choi
  • Eyal Amir

Robotic manipulation is important for real, physical-world applications. General-purpose manipulation with a robot (e.g., delivering dishes, opening doors with a key) is demanding. It is hard because (1) objects are constrained in position and orientation, (2) many non-spatial constraints interact (or interfere) with each other, and (3) robots may have many degrees of freedom (DOF). In this paper we solve the problem of general-purpose robotic manipulation using a novel combination of planning and motion planning. Our approach integrates motions of a robot with other (non-physical or external-to-robot) actions to achieve a goal while manipulating objects. It differs from previous, hierarchical approaches in that (a) it considers kinematic constraints in configuration space (C-space) together with constraints over object manipulations; (b) it automatically generates high-level (logical) actions from a C-space-based motion planning algorithm; and (c) it decomposes a planning problem into small segments, thus reducing the complexity of planning.

NeSy Conference 2008 Conference Paper

Hybrid Classification and Symbolic-Like Manipulation Using Self-Regulatory Feedback Networks

  • Tsvi Achler
  • Eyal Amir

We propose a hybrid model based on self-regulatory feedback. It is a connectionist network that can be composed in a modular fashion from simple constructs that combine to interact logically, without requiring a priori declaration of 'interaction-type' connections. It is primarily a classifier, but it resolves binding and is pliable to certain symbolic manipulations. We show that (1) compositions can simply be combined to create a hybrid recognition/symbolic network; (2) these compositions perform binding logic; and (3) representations can be manipulated in a symbolic-like fashion. These properties are an integral part of intelligent inference, and such networks provide a new direction for future research.

UAI Conference 2008 Conference Paper

Sampling First Order Logical Particles

  • Hannaneh Hajishirzi
  • Eyal Amir

Approximate inference in dynamic systems is the problem of estimating the state of the system given a sequence of actions and partial observations. High precision estimation is fundamental in many applications like diagnosis, natural language processing, tracking, planning, and robotics. In this paper we present an algorithm that samples possible deterministic executions of a probabilistic sequence. The algorithm takes advantage of a compact representation (using first order logic) for actions and world states to improve the precision of its estimation. Theoretical and empirical results show that the algorithm’s expected error is smaller than propositional sampling and Sequential Monte Carlo (SMC) sampling techniques.

IJCAI Conference 2007 Conference Paper

  • Deepak Ramachandran
  • Eyal Amir

Inverse Reinforcement Learning (IRL) is the problem of learning the reward function underlying a Markov Decision Process given the dynamics of the system and the behaviour of an expert. IRL is motivated by situations where knowledge of the rewards is a goal by itself (as in preference elicitation) and by the task of apprenticeship learning (learning policies from an expert). In this paper we show how to combine prior knowledge and evidence from the expert's actions to derive a probability distribution over the space of reward functions. We present efficient algorithms that find solutions for the reward learning and apprenticeship learning tasks that generalize well over these distributions. Experimental results show strong improvement for our methods over previous heuristic-based approaches.
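As a toy sketch of this Bayesian view (the paper works with a distribution over a continuous reward space; here we assume a finite candidate set and a Boltzmann-rational expert, and all names are invented for illustration):

```python
import math

def boltzmann_likelihood(q_values, demo, alpha=1.0):
    """P(demonstrated actions | reward) under a Boltzmann-rational
    expert: action a in state s has probability
    exp(alpha*Q(s,a)) / sum_b exp(alpha*Q(s,b)).
    `q_values[s][a]` are Q-values induced by a candidate reward."""
    p = 1.0
    for s, a in demo:
        z = sum(math.exp(alpha * q) for q in q_values[s].values())
        p *= math.exp(alpha * q_values[s][a]) / z
    return p

def reward_posterior(prior, q_by_reward, demo, alpha=1.0):
    """Posterior over a finite set of candidate reward functions:
    posterior(r) proportional to prior(r) * P(demo | r)."""
    unnorm = {r: prior[r] * boltzmann_likelihood(q_by_reward[r], demo, alpha)
              for r in prior}
    z = sum(unnorm.values())
    return {r: p / z for r, p in unnorm.items()}
```

A reward hypothesis whose induced Q-values favour the expert's observed actions accumulates posterior mass with each demonstration.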

IJCAI Conference 2007 Conference Paper

  • Dafna Shahaf
  • Eyal Amir

Logical Filtering is the problem of tracking the possible states of a world (belief state) after a sequence of actions and observations. It is fundamental to applications in partially observable dynamic domains. This paper presents the first exact logical filtering algorithm that is tractable for all deterministic domains. Our tractability result is interesting because it contrasts sharply with intractability results for structured stochastic domains. The key to this advance lies in using logical circuits to represent belief states. We prove that both filtering time and representation size are linear in the sequence length and the input size. They are independent of the domain size if the actions have compact representations. The number of variables in the resulting formula is at most the number of state features. We also report on a reasoning algorithm (answering propositional questions) for our circuits, which can handle questions about past time steps (smoothing). We evaluate our algorithms extensively on AI-planning domains. Our method outperforms competing methods, sometimes by orders of magnitude.

IJCAI Conference 2007 Conference Paper

  • Mark Richards
  • Eyal Amir

Computers have already eclipsed the level of human play in competitive Scrabble, but there remains room for improvement. In particular, there is much to be gained by incorporating information about the opponent's tiles into the decision-making process. In this work, we quantify the value of knowing what letters the opponent has. We use observations from previous plays to predict what tiles our opponent may hold and then use this information to guide our play. Our model of the opponent, based on Bayes' theorem, sacrifices accuracy for simplicity and ease of computation. But even with this simplified model, we show significant improvement in play over an existing Scrabble program. These empirical results suggest that this simple approximation may serve as a suitable substitute for the intractable partially observable Markov decision process. Although this work focuses on computer-vs-computer Scrabble play, the tools developed can be of great use in training humans to play against other humans.

IROS Conference 2007 Conference Paper

Factor-guided motion planning for a robot arm

  • Jaesik Choi
  • Eyal Amir

Motion planning for robotic arms is important for real, physical-world applications. Planning for arms with many degrees of freedom (DOF) is hard because the search space is large (exponential in the number of joints), and the links may collide with static obstacles or with other joints (self-collision). In this paper we present a motion planning algorithm that finds plans from one arm configuration to a goal arm configuration in 2D space, assuming no self-collision. Our algorithm is unique in two ways: (a) it utilizes the topology of the arm and obstacles to factor the search space and reduce the complexity of the planning problem using dynamic programming; and (b) it takes only polynomial time in the number of joints under some conditions. We provide a sufficient condition for polynomial-time motion planning for 2D-space arms: if there is a path between two homotopic configurations, an embedded local planner finds a path within polynomial time. The experimental results show that the proposed algorithm improves the performance of path planning for 2D arms.

UAI Conference 2007 Conference Paper

Reachability Under Uncertainty

  • Allen Chang
  • Eyal Amir

In this paper we introduce a new network reachability problem where the goal is to find the most reliable path between two nodes in a network, represented as a directed acyclic graph. Individual edges within this network may fail according to certain probabilities, and these failure probabilities may depend on the values of one or more hidden variables. This problem may be viewed as a generalization of shortest-path problems for finding minimum cost paths or Viterbi-type problems for finding highest-probability sequences of states, where the addition of the hidden variables introduces correlations that are not handled by previous algorithms. We give theoretical results characterizing this problem including an NP-hardness proof. We also give an exact algorithm and a more efficient approximation algorithm for this problem.
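Without hidden variables (i.e., independent edge failures), the most-reliable path reduces to a simple dynamic program over the DAG. A sketch under that simplifying assumption (names invented), which the paper's correlated setting deliberately breaks:

```python
import functools

def most_reliable_path(edges, source, target):
    """Most-reliable source->target path in a DAG, assuming
    INDEPENDENT edge success probabilities: maximise the product
    of probabilities along the path.  `edges` maps u -> [(v, p)].
    Hidden variables correlating edge failures (the paper's
    setting) invalidate this simple decomposition."""
    @functools.lru_cache(maxsize=None)
    def best(u):
        if u == target:
            return 1.0
        return max((p * best(v) for v, p in edges.get(u, [])), default=0.0)
    return best(source)
```

Equivalently, one can run a shortest-path algorithm on edge weights -log(p), which is the standard reduction the abstract alludes to.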

ICAPS Conference 2006 Conference Paper

Goal Achievement in Partially Known, Partially Observable Domains

  • Allen Chang
  • Eyal Amir

We present a decision making algorithm for agents that act in partially observable domains which they do not know fully. Making intelligent choices in such domains is very difficult because actions' effects may not be known a priori (partially known domain), and features may not always be visible (partially observable domain). Nonetheless, we show that an efficient solution is achievable in STRIPS domains by using traditional planning methods. This solution interleaves planning and execution carefully. Computing each plan takes time that is linear in the planning time for the fully observable, fully known domain. The number of actions that it executes is bounded by a polynomial in the length of the optimal plan in the fully observable, fully known domain. Our theoretical results and preliminary experiments demonstrate the effectiveness of the algorithm.

IJCAI Conference 2005 Conference Paper

Lifted First-Order Probabilistic Inference

  • Rodrigo de Salvo Braz
  • Eyal Amir
  • Dan

Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.

AIJ Journal 2005 Journal Article

Partition-based logical reasoning for first-order and propositional theories

  • Eyal Amir
  • Sheila McIlraith

In this paper we show how tree decomposition can be applied to reasoning with first-order and propositional logic theories. Our motivation is two-fold. First, we are concerned with how to reason effectively with multiple knowledge bases that have overlap in content. Second, we are concerned with improving the efficiency of reasoning over a set of logical axioms by partitioning the set with respect to some detectable structure, and reasoning over individual partitions either locally or in a distributed fashion. To this end, we provide algorithms for partitioning and reasoning with related logical axioms in propositional and first-order logic. Many of the reasoning algorithms we present are based on the idea of passing messages between partitions. We present algorithms for both forward (data-driven) and backward (query-driven) message passing. Different partitions may have different associated reasoning procedures. We characterize a class of reasoning procedures that ensures completeness and soundness of our message-passing algorithms. We further provide a specialized algorithm for propositional satisfiability checking with partitions. Craig's interpolation theorem serves as a key to proving soundness and completeness of all of these algorithms. An analysis of these algorithms emphasizes parameters of the partitionings that influence the efficiency of computation. We provide a greedy algorithm that automatically decomposes a set of logical axioms into partitions, following this analysis.

AIJ Journal 2004 Journal Article

Logic-based subsumption architecture

  • Eyal Amir
  • Pedrito Maynard-Zhang

We describe a logic-based AI architecture based on Brooks' subsumption architecture. In this architecture, we axiomatize different layers of control in First-Order Logic (FOL) and use independent theorem provers to derive each layer's outputs given its inputs. We implement the subsumption of lower layers by higher layers using nonmonotonic reasoning principles. In particular, we use circumscription to make default assumptions in lower layers, and nonmonotonically retract those assumptions when higher layers draw new conclusions. We also give formal semantics to our approach. Finally, we describe layers designed for the task of robot control and a system that we have implemented that uses this architecture for the control of a Nomad 200 mobile robot. Our system combines the virtues of using the represent-and-reason paradigm and the behavioral-decomposition paradigm. It allows multiple goals to be serviced simultaneously and reactively. It also allows high-level tasks and is tolerant to different changes and elaborations of its knowledge in runtime. Finally, it allows us to give more commonsense knowledge to robots. We report on several experiments that empirically show the feasibility of using fully expressive FOL theorem provers for robot control with our architecture and the benefits claimed above.

IJCAI Conference 2003 Conference Paper

Practical Partition-Based Theorem Proving for Large Knowledge Bases

  • Bill MacCartney
  • Sheila McIlraith
  • Eyal Amir
  • Tomás E. Uribe

In previous work, Levesque proposed an extension to classical databases that would allow for a certain form of incomplete first-order knowledge. Since this extension was sufficient to make full logical deduction undecidable, he also proposed an alternative reasoning scheme with desirable logical properties. He also claimed (without proof) that this reasoning could be implemented efficiently using database techniques such as projections and joins. In this paper, we substantiate this claim and show how to adapt a bottom-up database query evaluation algorithm for this purpose, thus obtaining a tractability result comparable to those that exist for databases.

NMR Workshop 2002 Conference Paper

Interpolation theorems for nonmonotonic reasoning systems

  • Eyal Amir

Craig's interpolation theorem (Craig, 1957) is an important theorem known for propositional logic and first-order logic. It says that if a logical formula β logically follows from a formula α, then there is a formula γ, including only symbols that appear in both α and β, such that β logically follows from γ and γ logically follows from α. Such theorems are important and useful for understanding those logics in which they hold, as well as for speeding up reasoning with theories in those logics. In this paper we present interpolation theorems in this spirit for three nonmonotonic systems: circumscription, default logic, and logic programs with the stable model semantics (a.k.a. answer set semantics). These results give us a better understanding of those logics, especially in contrast to their nonmonotonic characteristics. They suggest that some monotonicity principle holds despite the failure of classical monotonicity for these logics. Also, they sometimes allow us to use methods for the decomposition of reasoning for these systems, possibly increasing their applicability and tractability. Finally, they allow us to build structured representations that use those logics.

JELIA Conference 2002 Conference Paper

Interpolation Theorems for Nonmonotonic Reasoning Systems

  • Eyal Amir

Craig's interpolation theorem [3] is an important theorem known for propositional logic and first-order logic. It says that if a logical formula β logically follows from a formula α, then there is a formula γ, including only symbols that appear in both α and β, such that β logically follows from γ and γ logically follows from α. Such theorems are important and useful for understanding those logics in which they hold, as well as for speeding up reasoning with theories in those logics. In this paper we present interpolation theorems in this spirit for three nonmonotonic systems: circumscription, default logic, and logic programs with the stable model semantics (a.k.a. answer set semantics). These results give us a better understanding of those logics, especially in contrast to their nonmonotonic characteristics. They suggest that some monotonicity principle holds despite the failure of classical monotonicity for these logics. Also, they sometimes allow us to use methods for the decomposition of reasoning for these systems, possibly increasing their applicability and tractability. Finally, they allow us to build structured representations that use those logics.
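In standard notation, the classical interpolation property that both versions of this abstract describe is:

```latex
% Craig interpolation (classical form):
\text{If } \alpha \models \beta, \text{ then there exists } \gamma \text{ with }
\mathit{sym}(\gamma) \subseteq \mathit{sym}(\alpha) \cap \mathit{sym}(\beta)
\text{ such that } \alpha \models \gamma \text{ and } \gamma \models \beta.
```

The paper's contribution is establishing analogues of this property for circumscription, default logic, and stable-model logic programs, where classical monotonicity fails.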

AAAI Conference 2000 Conference Paper

(De)Composition of Situation Calculus Theories

  • Eyal Amir

We show that designing large situation calculus theories can be simplified by using object-oriented techniques and tools together with established solutions to the frame problem. Situation calculus (McCarthy & Hayes 1969) is one of the leading logical representations for action and change, but large situation calculus theories are not easy to design and maintain, nor are they flexible for extension or reuse. However, we wish to use it to represent large, complex domains. To address this problem, we apply our proposed methodology to situation calculus theories and analyze the composition of theories in this light. The object-oriented tools that we use do not change the semantics of situation calculus, so all the original situation calculus results apply in our setting and vice versa. We get two additional results from this approach. First, we offer a new treatment of loosely interacting agents that uses situation calculus without abandoning the resulting formalism. This treatment allows a theory-builder to construct a theory without considering its potential inclusion in a multiple-agents setup. Second, theories that we build in this way admit specialized reasoning algorithms.

AAAI Conference 1999 Short Paper

Elaboration Tolerance of Logical Theories

  • Eyal Amir

We consider the development and modification of logical theories (e.g., commonsense theories). During development of such knowledge bases (KBs) a knowledge engineer makes some design and modeling choices. These decisions may later force the KB to undergo some redesign and rewriting when new knowledge needs to be integrated. We then say that the KB lacks Elaboration Tolerance. McCarthy illustrated this problem using example elaborations for the toy problem of the Missionaries and Cannibals.

IJCAI Conference 1999 Conference Paper

Logic-Based Subsumption Architecture

  • Eyal Amir
  • Pedrito Maynard-Reid II

We describe a logic-based AI architecture based on Brooks' subsumption architecture. In this architecture, we axiomatize different layers of control in First-Order Logic (FOL) and use independent theorem provers to derive each layer's outputs given its inputs. We implement the subsumption of lower layers by higher layers using circumscription to make assumptions in lower layers, and nonmonotonically retract them when higher layers draw new conclusions. We also give formal semantics to our approach. Finally, we describe four layers designed for the task of robot control and an experiment that empirically shows the feasibility of using fully expressive FOL theorem provers for robot control with our architecture.