YNIMG Journal 2018 Journal Article
Human EEG reveals distinct neural correlates of power and precision grasping types
- Iñaki Iturrate
- Ricardo Chavarriaga
- Michael Pereira
- Huaijian Zhang
- Tiffany Corbet
- Robert Leeb
- José del R. Millán
Author name cluster
Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches and is not a full identity-disambiguation profile.
YNIMG Journal 2018 Journal Article
AAAI Conference 2014 Conference Paper
Recent works have explored the use of brain signals to directly control virtual and robotic agents in sequential tasks. So far, such brain-computer interfaces (BCI) have required an explicit calibration phase to build a decoder that translates raw electroencephalography (EEG) signals from the brain of each user into meaningful instructions. This paper proposes a method that removes the calibration phase and allows a user to control an agent to solve a sequential task. The proposed method assumes a distribution over possible tasks and infers both the interpretation of the EEG signals and the task by selecting the hypothesis that best explains the history of interaction. We introduce a measure of uncertainty over the task and over the EEG signal interpretation that acts as an exploration bonus for the planning strategy. This speeds up learning by guiding the system toward regions that better disambiguate among task hypotheses. We report experiments in which four users controlled an agent in a virtual world via BCI to reach a target, without any prior calibration process.
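As a rough illustration of the hypothesis-selection idea (not the paper's implementation), the sketch below maintains a posterior over joint (task, decoder) hypotheses, updates it with each observed EEG response, and uses posterior entropy as the exploration bonus. The hypothesis count and likelihood values are illustrative assumptions.

```python
import numpy as np

def update_posterior(log_post, log_lik):
    """Bayesian update: add the log-likelihood of the latest EEG response
    under each joint (task, decoder) hypothesis, then renormalize."""
    log_post = log_post + log_lik
    return log_post - np.logaddexp.reduce(log_post)

def uncertainty_bonus(log_post):
    """Posterior entropy, used as an exploration bonus: the planner prefers
    actions expected to reduce it, i.e. to disambiguate hypotheses."""
    p = np.exp(log_post)
    return -np.sum(p * log_post)

# Four joint hypotheses, uniform prior.
log_post = np.log(np.full(4, 0.25))
print(uncertainty_bonus(log_post))          # log(4): maximal uncertainty
# Fabricated per-hypothesis likelihoods of one observed EEG response.
log_post = update_posterior(log_post, np.log([0.6, 0.2, 0.1, 0.1]))
print(uncertainty_bonus(log_post))          # lower: the observation helped
```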
UAI Conference 2014 Conference Paper
Interactive learning deals with the problem of learning and solving tasks using human instructions. It is common in human-robot interaction, tutoring systems, and human-computer interfaces such as brain-computer interfaces. In most cases, learning these tasks is possible because the signals are predefined or because an ad-hoc calibration procedure maps signals to specific meanings. In this paper, we address the problem of simultaneously solving a task under human feedback and learning the meanings of the feedback signals. This has important practical applications, since the user can start controlling a device from scratch, without needing an expert to define the meaning of the signals or a calibration phase. The paper proposes an algorithm that assigns meanings to signals while solving a sequential task, under the assumption that human and machine share the same prior over the possible instruction meanings and the possible tasks. Furthermore, using synthetic and real EEG data from a brain-computer interface, we show that taking into account the uncertainty over both the task and the signal interpretation is necessary for the machine to actively plan how to solve the task efficiently.
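A minimal sketch of the joint inference, under assumed toy dynamics: a 1-D world where each task is a target state, each hypothesis pairs a target with an assignment of raw EEG classes to "correct"/"error" meanings, and the selected hypothesis is the one that best explains the feedback history. The world, noise model, and data below are fabricated for illustration.

```python
import itertools, math

tasks = [0, 4]                                            # candidate targets
meanings = [("correct", "error"), ("error", "correct")]   # raw class -> label

def is_correct(state, action, target):
    """An action is 'correct' if it moves the agent closer to the target."""
    return abs(state + action - target) < abs(state - target)

def log_lik(target, meaning, history, noise=0.2):
    """Log-likelihood of a (state, action, raw EEG class) history under one
    joint (target, meaning) hypothesis, with a symmetric decoding noise."""
    ll = 0.0
    for state, action, raw in history:
        label = "correct" if is_correct(state, action, target) else "error"
        ll += math.log(1 - noise if meaning[raw] == label else noise)
    return ll

# Fabricated interaction history: (state, action, raw EEG class).
history = [(2, +1, 0), (3, +1, 0), (1, -1, 1)]
best = max(itertools.product(tasks, meanings),
           key=lambda h: log_lik(h[0], h[1], history))
print(best)   # jointly most plausible (task, meaning) hypothesis
```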
ICRA Conference 2010 Conference Paper
Reinforcement learning algorithms have been successfully applied in robotics to learn how to solve tasks from reward signals obtained during task execution. These reward signals are usually modeled by the programmer or provided by supervision. However, in some situations the reward is hard to encode and would instead require a supervised approach to reinforcement learning in which a user manually enters the reward on each trial. This paper proposes to use brain activity recorded by an EEG-based BCI system as the reward signal. The idea is to obtain the reward from the brain activity generated while the user observes the robot solving the task. This process requires no explicit model of the reward signal and, moreover, can capture subjective aspects that are specific to each user. To achieve this, we designed a new protocol that exploits brain activity related to the correct or erroneous execution of the task. We show that it is possible to detect and classify different levels of error in single trials, and that reinforcement learning algorithms can learn new, similar tasks using the rewards obtained from brain activity.
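The core loop could look like the following sketch: standard tabular Q-learning in which the scalar reward comes from an error-potential decoder applied to the EEG epoch recorded while the user watches the robot act. `decode_reward` is a stand-in; the paper's actual decoder and reward coding are not reproduced here.

```python
from collections import defaultdict

def decode_reward(eeg_epoch):
    """Stand-in for a trained error-potential classifier mapping the EEG
    epoch recorded while the user watches a move to a scalar reward
    (e.g. +1 for correct, -1 for erroneous execution)."""
    return eeg_epoch["simulated_label"]

def q_update(Q, s, a, s_next, eeg_epoch, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step driven by a brain-decoded reward."""
    r = decode_reward(eeg_epoch)
    best_next = max(Q[(s_next, b)] for b in (-1, +1))
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)
# One fabricated transition: the agent moved right and the observer's EEG
# was classified as "correct".
q_update(Q, s=2, a=+1, s_next=3, eeg_epoch={"simulated_label": +1.0})
print(Q[(2, +1)])  # 0.1 after one positive update
```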
ICRA Conference 2009 Conference Paper
This paper describes a new non-invasive brain-actuated wheelchair that relies on a P300 neurophysiological protocol and automated navigation. In operation, the subject faces a screen displaying a real-time virtual reconstruction of the scenario and concentrates on the area of the space to reach. A visual stimulation process elicits the P300 response, and EEG signal processing detects the target area. This target area is given as a goal location to the autonomous navigation system, which drives the wheelchair to the desired place while avoiding collisions with obstacles detected by the laser scanner. The accuracy of the brain-computer interface is above 94%, and the flexibility of the sensor-based motion system allows navigation in non-prepared and populated scenarios. The prototype was validated with five healthy subjects in three experimental sessions: screening (an analysis of three different interfaces and their implications for user performance), virtual-environment driving (training and instruction of the users), and driving sessions with the wheelchair (driving tests along pre-established circuits). On the basis of the results, the paper reports a technical evaluation of the device and a variability study. All users were able to use the device successfully and with relative ease, showing good adaptation.
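As a hedged sketch of how such a P300 detector might work (the paper's actual signal-processing chain is not reproduced here): average the stimulus-locked EEG epochs for each screen area and select the area with the strongest response in the typical P300 latency window. The sampling rate, channel, and window are assumptions.

```python
import numpy as np

FS = 256                                          # assumed sampling rate (Hz)
P300_WIN = slice(int(0.25 * FS), int(0.45 * FS))  # ~250-450 ms post-stimulus

def detect_target(epochs_by_area):
    """epochs_by_area: {area_id: array of shape (n_epochs, n_samples)},
    each row an EEG epoch time-locked to a flash of that area (e.g. from a
    midline channel such as Pz). Returns the area whose averaged response
    in the P300 window is largest, i.e. the presumed attended area."""
    scores = {area: epochs.mean(axis=0)[P300_WIN].mean()
              for area, epochs in epochs_by_area.items()}
    return max(scores, key=scores.get)

# Synthetic check: area 3 gets an added positive deflection in its window.
rng = np.random.default_rng(0)
epochs = {a: rng.normal(0, 1, (20, FS)) for a in range(1, 5)}
epochs[3][:, P300_WIN] += 2.0
print(detect_target(epochs))  # -> 3
```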