
Author name cluster

Anthony Thomas

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author row

Possible papers (4)

NeurIPS 2024 · Conference Paper

Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps

  • Christopher J. Kymn
  • Sonia Mazelet
  • Anthony Thomas
  • Denis Kleyko
  • E. P. Frady
  • Friedrich T. Sommer
  • Bruno A. Olshausen

We propose a normative model for spatial representation in the hippocampal formation that combines optimality principles, such as maximizing coding range and spatial information per neuron, with an algebraic framework for computing in distributed representation. Spatial position is encoded in a residue number system, with individual residues represented by high-dimensional, complex-valued vectors. These are composed into a single vector representing position by a similarity-preserving, conjunctive vector-binding operation. Self-consistency between the vectors representing position and the individual residues is enforced by a modular attractor network whose modules correspond to the grid cell modules in entorhinal cortex. The vector binding operation can also be used to bind different contexts to spatial representations, yielding a model for entorhinal cortex and hippocampus. We provide model analysis of scaling, similarity preservation and convergence behavior as well as experiments demonstrating noise robustness, sub-integer resolution in representing position, and path integration. The model formalizes the computations in the cognitive map and makes testable experimental predictions.
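
The abstract above describes encoding position in a residue number system, with each residue carried by a high-dimensional, complex-valued (phasor) vector and the residues combined by a conjunctive vector-binding operation. The sketch below illustrates only that encoding idea, assuming FHRR-style phasor codes and elementwise-product binding; the dimensionality, moduli, and function names are arbitrary choices for illustration, and the modular attractor network and path-integration machinery of the paper are not reproduced.

```python
# Illustrative sketch (not the authors' code): residue-number-system encoding of
# position with high-dimensional complex phasor vectors, combined by a
# conjunctive (elementwise-product) binding operation.
import numpy as np

rng = np.random.default_rng(0)
D = 2048                      # hypervector dimensionality (assumed)
moduli = [3, 5, 7]            # pairwise-coprime moduli; encodable range = 105

# One random base phasor per modulus: unit-magnitude entries whose phases are
# multiples of 2*pi/m, so integer powers of the base cycle with period m.
bases = [np.exp(2j * np.pi * rng.integers(0, m, size=D) / m) for m in moduli]

def encode(x):
    """Bind the residue codes (x mod m_i) into a single position vector."""
    v = np.ones(D, dtype=complex)
    for base, m in zip(bases, moduli):
        v *= base ** (x % m)  # binding by elementwise multiplication
    return v

def similarity(u, v):
    return float(np.real(np.vdot(u, v)) / D)

print(similarity(encode(42), encode(42)))  # close to 1.0 (same position)
print(similarity(encode(42), encode(43)))  # close to 0.0 (different position)
```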

JBHI 2023 · Journal Article

M2D2: Maximum-Mean-Discrepancy Decoder for Temporal Localization of Epileptic Brain Activities

  • Alireza Amirshahi
  • Anthony Thomas
  • Amir Aminifar
  • Tajana Rosing
  • David Atienza

Recent years have seen growing interest in leveraging deep learning models for monitoring epilepsy patients based on electroencephalographic (EEG) signals. However, these approaches often exhibit poor generalization when applied outside of the setting in which the training data was collected. Furthermore, manual labeling of EEG signals is a time-consuming process requiring expert analysis, making fine-tuning patient-specific models to new settings a costly proposition. In this work, we propose the Maximum-Mean-Discrepancy Decoder (M2D2) for automatic temporal localization and labeling of seizures in long EEG recordings to assist medical experts. We show that M2D2 achieves F1-scores of 76.0% and 70.4% for temporal localization when evaluated on EEG data gathered in a different clinical setting than the training data. The results demonstrate that M2D2 yields substantially higher generalization performance than other state-of-the-art deep learning-based approaches.
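
As background for the abstract above, the sketch below computes the standard biased empirical estimate of the squared maximum mean discrepancy (MMD) between two windows of feature vectors using an RBF kernel, the statistic the decoder is named for. It is a generic illustration only: the EEG feature extraction, decoder architecture, and thresholding used in M2D2 are not shown, and all names and values are assumptions.

```python
# Illustrative sketch: the squared-MMD statistic between two sets of feature
# vectors, with a Gaussian (RBF) kernel. Not the M2D2 pipeline itself.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased empirical estimate of squared MMD between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Toy example: compare a reference (e.g., non-seizure) window against a
# candidate window of feature vectors; a large MMD suggests the two windows
# were drawn from different distributions.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(200, 16))
candidate = rng.normal(0.8, 1.2, size=(200, 16))
print(mmd2(reference, candidate))
```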

IJCAI 2022 · Conference Paper

A Theoretical Perspective on Hyperdimensional Computing (Extended Abstract)

  • Anthony Thomas
  • Sanjoy Dasgupta
  • Tajana Rosing

Hyperdimensional (HD) computing is a set of neurally inspired methods for computing on high-dimensional, low-precision, distributed representations of data. These representations can be combined with simple, neurally plausible algorithms to effect a variety of information processing tasks. HD computing has recently garnered significant interest from the computer hardware community as an energy-efficient, low-latency, and noise-robust tool for solving learning problems. We present a novel mathematical framework that unifies analysis of HD computing architectures, and provides general, non-asymptotic, sufficient conditions under which HD information processing techniques will succeed.
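
For readers unfamiliar with the primitives the abstract refers to, the sketch below shows a generic HD-computing example: random high-dimensional bipolar codes, conjunctive binding, bundling by superposition, and similarity by cosine. It is an illustration under those common assumptions, not the mathematical framework developed in the paper.

```python
# Illustrative sketch of basic HD-computing operations with bipolar hypervectors.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                  # typical HD dimensionality (assumed)

def random_hv():
    return rng.choice([-1, 1], size=D)      # random bipolar hypervector

def bind(a, b):
    return a * b                            # conjunctive binding (elementwise)

def bundle(*vs):
    return np.sign(np.sum(vs, axis=0))      # superposition with majority sign

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode a tiny key-value record and query it back by unbinding with the key.
color, shape = random_hv(), random_hv()
red, circle = random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, circle))

print(cosine(bind(record, color), red))     # well above 0: recovers "red"
print(cosine(bind(record, color), circle))  # near 0: unrelated value
```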

JAIR 2021 · Journal Article

A Theoretical Perspective on Hyperdimensional Computing

  • Anthony Thomas
  • Sanjoy Dasgupta
  • Tajana Rosing

Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining high-dimensional, low-precision, distributed representations of data. These representations can be combined with simple, neurally plausible algorithms to effect a variety of information processing tasks. HD computing has recently garnered significant interest from the computer hardware community as an energy-efficient, low-latency, and noise-robust tool for solving learning problems. In this review, we present a unified treatment of the theoretical foundations of HD computing with a focus on the suitability of representations for learning.
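
Because the review's focus is the suitability of HD representations for learning, the sketch below shows one common learning recipe in that style: encode inputs with a fixed random projection followed by a sign nonlinearity, then classify by comparing against bundled class prototypes. The encoder, data, and dimensions are illustrative assumptions, not the paper's formal setting or results.

```python
# Illustrative sketch: prototype-based classification on HD encodings.
import numpy as np

rng = np.random.default_rng(0)
D, d = 10_000, 20                       # HD dimension and input dimension (assumed)
Phi = rng.normal(size=(D, d))           # fixed random encoding matrix

def encode(x):
    return np.sign(Phi @ x)             # high-dimensional, low-precision code

# Toy two-class data: Gaussians with different means.
X0 = rng.normal(-1.0, 1.0, size=(100, d))
X1 = rng.normal(+1.0, 1.0, size=(100, d))

# Class prototypes: bundle (sum, then threshold) the encoded training examples.
proto0 = np.sign(sum(encode(x) for x in X0))
proto1 = np.sign(sum(encode(x) for x in X1))

def predict(x):
    h = encode(x)
    return int(h @ proto1 > h @ proto0)

test_point = rng.normal(+1.0, 1.0, size=d)
print(predict(test_point))              # very likely prints 1
```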