Arrow Research

Author name cluster

Jonathan Grizou

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers

6

NeurIPS Conference 2025 Conference Paper

Self-Calibrating BCIs: Ranking and Recovery of Mental Targets Without Labels

  • Jonathan Grizou
  • Carlos De la Torre-Ortiz
  • Tuukka Ruotsalo

We consider the problem of recovering a mental target (e.g., an image of a face) that a participant has in mind from paired EEG (i.e., brain responses) and image (i.e., perceived faces) data collected during interactive sessions without access to labeled information. The problem has been previously explored with labeled data but not via self-calibration, where labeled data is unavailable. Here, we present the first framework and an algorithm, CURSOR, that learns to recover unknown mental targets without access to labeled data or pre-trained decoders. Our experiments on naturalistic images of faces demonstrate that CURSOR can (1) predict image similarity scores that correlate with human perceptual judgments without any label information, (2) use these scores to rank stimuli against an unknown mental target, and (3) generate new stimuli indistinguishable from the unknown mental target (validated via a user study, N=53). We release the brain response data set (N=29), associated face images used as stimuli data, and a codebase to initiate further research on this novel task.

ICRA Conference 2024 Conference Paper

TELESIM: A Modular and Plug-and-Play Framework for Robotic Arm Teleoperation using a Digital Twin

  • Florent P. Audonnet
  • Jonathan Grizou
  • Andrew Hamilton
  • Gerardo Aragon-Camarasa

Teleoperating robotic arms can be a challenging task for non-experts, particularly when using complex control devices or interfaces. To address the limitations and challenges of existing teleoperation frameworks, such as cognitive strain, control complexity, robot compatibility, and user evaluation, we propose TELESIM, a modular and plug-and-play framework that enables direct teleoperation of any robotic arm using a digital twin as the interface between users and the robotic system. Due to TELESIM's modular design, the digital twin can be controlled with any device that outputs a 3D pose, such as a virtual reality controller or a finger-mapping hardware controller. To evaluate the efficacy and user-friendliness of TELESIM, we conducted a user study with 37 participants. The study involved a simple pick-and-place task, which was performed using two different robots equipped with two different control modalities. Our experimental results show that most users succeeded in building a tower of at least 3 cubes within 10 minutes, with only 5 minutes of training beforehand, regardless of the control modality or robot used, demonstrating the usability and user-friendliness of TELESIM.

IJCAI Conference 2019 Conference Paper

The Open Vault Challenge - Learning How to Build Calibration-Free Interactive Systems by Cracking the Code of a Vault

  • Jonathan Grizou

This demo takes the form of a challenge to the IJCAI community. A physical vault, secured by a 4-digit code, will be placed in the demo area. The author will publicly open the vault by entering the code on a touch-based interface, and as many times as requested. The challenge to the IJCAI participants will be to crack the code, open the vault, and collect its content. The interface is based on previous work on calibration-free interactive systems that enables a user to start instructing a machine without the machine knowing how to interpret the user’s actions beforehand. The intent and the behavior of the human are simultaneously learned by the machine. An online demo and videos are available for readers to participate in the challenge. An additional interface using vocal commands will be revealed on the demo day, demonstrating the scalability of our approach to continuous input signals.

IROS Conference 2015 Conference Paper

Facilitating intention prediction for humans by optimizing robot motions

  • Freek Stulp
  • Jonathan Grizou
  • Baptiste Busch
  • Manuel Lopes 0001

Members of a team are able to coordinate their actions by anticipating the intentions of others. Achieving such implicit coordination between humans and robots requires humans to be able to quickly and robustly predict the robot's intentions, i.e., the robot should demonstrate a behavior that is legible. Whereas previous work has sought to explicitly optimize the legibility of behavior, we investigate legibility as a property that arises automatically from general requirements on the efficiency and robustness of joint human-robot task completion. We do so by optimizing fast and successful completion of joint human-robot tasks through policy improvement with stochastic optimization. Two experiments with human subjects show that robots are able to adapt their behavior so that humans become better at predicting the robot's intentions early on, which leads to faster and more robust overall task completion.

AAAI Conference 2014 Conference Paper

Calibration-Free BCI Based Control

  • Jonathan Grizou
  • Iñaki Iturrate
  • Luis Montesano
  • Pierre-Yves Oudeyer
  • Manuel Lopes

Recent works have explored the use of brain signals to directly control virtual and robotic agents in sequential tasks. So far, such brain-computer interfaces (BCI) have required an explicit calibration phase to build a decoder that translates raw electroencephalography (EEG) signals from the brain of each user into meaningful instructions. This paper proposes a method that removes the calibration phase and allows a user to control an agent to solve a sequential task. The proposed method assumes a distribution of possible tasks, and infers both the interpretation of the EEG signals and the task by selecting the hypothesis which best explains the history of interaction. We introduce a measure of uncertainty over the task and over the EEG signal interpretation to act as an exploratory bonus for a planning strategy. This speeds up learning by guiding the system to regions that better disambiguate among task hypotheses. We report experiments in which four users used a BCI to control an agent in a virtual world to reach a target without any prior calibration process.
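The core idea of the abstract — jointly inferring the task and the signal interpretation by picking the hypothesis that best explains the interaction history — can be illustrated with a toy sketch. This is a hypothetical example, not the paper's actual algorithm, data, or code: positions, means, and the hand-picked signal values below are all invented for illustration, and the real method additionally uses an uncertainty-based exploration bonus not shown here.

```python
# Toy illustration of calibration-free hypothesis selection (hypothetical;
# not the paper's code). An agent acts on positions 0..4, and the unknown
# task is "reach position t".
TASKS = range(5)

# After each action the user emits a 1-D feedback signal, assumed drawn
# from N(mu, 1), where one mean encodes "correct" and the other
# "incorrect" -- but the machine does not know which mean is which.
INTERPRETATIONS = [(+2.0, -2.0), (-2.0, +2.0)]  # (mu_correct, mu_incorrect)

# Recorded interaction history: (action taken, observed signal).
history = [(0, -1.8), (3, 2.1), (1, -2.3), (3, 1.7),
           (4, -1.9), (3, 2.4), (2, -2.2), (3, 1.6)]

def log_likelihood(task, interp):
    """Gaussian log-likelihood (up to a constant) of the whole history
    under the joint hypothesis (task, signal interpretation)."""
    mu_ok, mu_bad = interp
    return sum(-0.5 * (s - (mu_ok if a == task else mu_bad)) ** 2
               for a, s in history)

# Self-calibration: select the (task, interpretation) pair that best
# explains the history of interaction.
task, interp = max(((t, i) for t in TASKS for i in INTERPRETATIONS),
                   key=lambda h: log_likelihood(*h))
print(task, interp)  # -> 3 (2.0, -2.0)
```

Note the symmetry this setup must break: for a single action, both interpretations can explain the signal equally well under different tasks; only the accumulated history over several distinct actions makes one joint hypothesis clearly dominate.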

UAI Conference 2014 Conference Paper

Interactive Learning from Unlabeled Instructions

  • Jonathan Grizou
  • Iñaki Iturrate
  • Luis Montesano
  • Pierre-Yves Oudeyer
  • Manuel Lopes 0001

Interactive learning deals with the problem of learning and solving tasks using human instructions. It is common in human-robot interaction, tutoring systems, and human-computer interfaces such as brain-computer ones. In most cases, learning these tasks is possible because the signals are predefined or because an ad-hoc calibration procedure maps the signals to specific meanings. In this paper, we address the problem of simultaneously solving a task under human feedback and learning the associated meanings of the feedback signals. This has important practical applications, since the user can start controlling a device from scratch, without the need for an expert to define the meaning of the signals or to carry out a calibration phase. The paper proposes an algorithm that simultaneously assigns meanings to signals while solving a sequential task, under the assumption that both human and machine share the same prior over the possible instruction meanings and the possible tasks. Furthermore, we show, using synthetic and real EEG data from a brain-computer interface, that taking into account the uncertainty of the task and of the signal is necessary for the machine to actively plan how to solve the task efficiently.