Arrow Research search

Author name cluster

Berkeley

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity disambiguation profile.

6 papers
1 author row

Possible papers (6)

RLDM Conference 2025 Conference Abstract

The role of the thalamus in dynamic decision-making (RLDM 2025 Abstract Booklet, 158)

  • Caldinelli, C.
  • Li, J.-J.
  • Arcaro, M. J.
  • Collins, A. G. E.
  • University of California, Berkeley
  • University of Pennsylvania
The thalamus lies in a pivotal position as an integrative hub for areas involved in high-level cognition. Despite its important theoretical role in many computational models and its involvement in psychiatric disorders, the human thalamus has received little study in the context of higher-level cognition and cognitive flexibility. However, studies with non-human animals show involvement of the mediodorsal nucleus (MD) and the prefrontal cortex during rule representation and switching [1], and dysfunction of hub regions is associated with behavioral and cognitive impairment [2], [3]. We developed a novel behavioral task, inspired by a previous rule-switching task [4], [5], to probe rapid rule switching under uncertainty. Behavioral and computational modeling results show that during this task participants integrate reward uncertainty with higher-order knowledge of the task structure to efficiently explore the rule space. We use an extension of the PROBE model [5] to identify discrete transitions between periods of rule exploration and exploitation. 42 participants performed the task during an fMRI session. Preliminary results show activation of the mediodorsal nucleus (MD) and the prefrontal cortex (PFC) during switch trials and rule-exploration periods. Further analysis will confirm the association of MD with preferential coding of frequent rules.

If the thalamus is indeed involved in rule switching, this would suggest that, thanks to its extensive and reciprocal connections with the frontal and parietal cortices, it plays a primary role in integrating information in dynamically changing and ambiguous environments, such as solving a problem under uncertainty and updating the rules required to achieve a goal.
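The abstract's segmentation of trials into exploration and exploitation periods could, in spirit, be sketched as a two-state hidden Markov model over trial outcomes, decoded with Viterbi. This is a generic stand-in, not the PROBE extension itself; all parameter values below are assumptions chosen only for illustration.

```python
import math

# Assumed emission/transition parameters: exploit trials are mostly
# correct, explore trials are near chance; states are sticky.
P_CORRECT = {"explore": 0.4, "exploit": 0.9}
P_STAY = 0.9  # probability of remaining in the current latent state
STATES = ("explore", "exploit")

def emission(state, outcome):
    p = P_CORRECT[state]
    return math.log(p if outcome else 1 - p)

def transition(prev, cur):
    return math.log(P_STAY if prev == cur else 1 - P_STAY)

def segment(outcomes):
    """Viterbi-decode the most likely explore/exploit label per trial."""
    score = {s: math.log(0.5) + emission(s, outcomes[0]) for s in STATES}
    backpointers = []
    for outcome in outcomes[1:]:
        new_score, ptr = {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: score[p] + transition(p, s))
            new_score[s] = score[best_prev] + transition(best_prev, s) + emission(s, outcome)
            ptr[s] = best_prev
        score = new_score
        backpointers.append(ptr)
    # Trace the best path back from the final trial.
    path = [max(STATES, key=score.get)]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    return path[::-1]

# 1 = correct trial, 0 = error; early errors suggest exploration.
outcomes = [0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1]
labels = segment(outcomes)
```

With these parameters, the early error-prone trials are labeled exploration and the later run of correct responses exploitation; the actual model additionally conditions on reward uncertainty and task structure.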

AAAI Conference 1999 Short Paper

A Bayesian Approach to Object Identification

  • Hanna Pasula
  • University of California
  • Berkeley

There are many real-world domains where an agent can observe the world state only partially and intermittently, using noisy sensors. Merely keeping track of the objects present in such a system is non-trivial. The problem may be complicated further if the system dynamics are not fully known or are unpredictable, so that some on-line learning is necessary. I have been working on a principled approach to state estimation and prediction under these realistic conditions. So far, I have focused mostly on object identification: deciding whether some newly observed object is the same as a previously observed one. The work has been applied to the surveillance of a large metropolitan freeway system.
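The core identification question can be illustrated as a posterior over "same object" versus "new object" given one noisy feature. This is a toy sketch, not the paper's actual model: the Gaussian sensor noise and the uniform prior over new objects' features are assumptions.

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log density of a Gaussian with mean mu and standard deviation sigma."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def p_same(observation, stored, noise_sd, feature_range, prior_same=0.5):
    """Posterior probability that the observation is the stored object.

    Under "same", the feature is the stored value plus Gaussian sensor
    noise; under "new", it is uniform over the feature range (assumed).
    """
    log_same = gauss_logpdf(observation, stored, noise_sd)
    log_new = -math.log(feature_range)
    num = prior_same * math.exp(log_same)
    den = num + (1 - prior_same) * math.exp(log_new)
    return num / den

# A hypothetical vehicle feature: stored length 4.5 m, new reading 4.6 m,
# sensor noise sd 0.2 m, plausible lengths spanning 10 m.
p = p_same(4.6, 4.5, 0.2, feature_range=10.0)
```

A reading close to the stored value yields a posterior near 1, while a distant reading drives it toward 0; the full data-association problem must additionally handle many tracks and many observations at once.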

AAAI Conference 1999 Short Paper

Learning Form-Meaning Mappings for Language

  • Nancy Chang
  • University of California
  • Berkeley
  • International Computer Science Institute

The proposed thesis research addresses two of the main obstacles to building agents that communicate using natural language: the need for richer representations of linguistic constructions that incorporate aspects of conceptual knowledge, context and goals; and the need for a principled approach to the automatic acquisition of such structures from examples. More generally, it explores the idea that patterns that arise in language are inextricably linked with and motivated by patterns of meaning and experience. This view, along with empirical evidence suggesting that linguistic knowledge at all levels can be characterized as mappings between form and meaning, serves as the basis for a computational model of the acquisition of simple phrasal and clausal constructions.

AAAI Conference 1999 Conference Paper

Moving Right Along: A Computational Model of Metaphoric Reasoning about Events

  • Srinivas Narayanan
  • University of California
  • Berkeley

This paper describes the results of an implemented computational model that cashes out the belief that reasoning about abstract events and actions relies on metaphoric projections of embodied primitives. The specific task addressed is the interpretation of simple causal narratives taken from newspaper articles in the domains of Politics and Economics. When presented with a surface-parsed version of these narratives as input, the system described is able to generate commonsense inferences consistent with the input.

AAAI Conference 1999 Short Paper

Towards Bounded Optimal Meta-Level Control: A Case Study

  • Daishi Harada
  • University of California
  • Berkeley

Suppose we allow the controller to perform arbitrary search, and to base its control on the backed-up information. To do this, we need to make decisions about the following: the order in which search nodes are expanded, and when to stop searching and actually "commit" to a control. The approach that we take is to view these decisions as the meta-level control problem. With some care in the formulation, it can be seen that a solution to this meta-level control problem will provide us with a bounded-optimal controller. We would like to solve this problem using algorithms from reinforcement learning.
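The "search more or commit now?" decision can be sketched with a simple myopic stopping rule: keep refining while the expected improvement from one more step exceeds its computational cost. This is not the paper's reinforcement-learning formulation; the cost and the toy search below are assumptions for illustration.

```python
def metalevel_control(refine, cost_per_step, max_steps=100):
    """refine(step) returns (current_best_value, expected_improvement).

    Commits (stops deliberating) as soon as another step of search is
    no longer expected to pay for its own computational cost.
    """
    value = None
    for step in range(max_steps):
        value, expected_gain = refine(step)
        if expected_gain <= cost_per_step:
            break  # commit to the current best control
    return value, step + 1

# Toy search whose improvements shrink geometrically with each expansion.
def toy_refine(step):
    value = 1.0 - 0.5 ** (step + 1)    # backed-up value after step+1 expansions
    expected_gain = 0.5 ** (step + 2)  # predicted gain from one more expansion
    return value, expected_gain

value, steps_used = metalevel_control(toy_refine, cost_per_step=0.01)
```

A reinforcement-learning treatment would instead learn when to expand and when to commit from experience, rather than rely on this hand-coded myopic estimate.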