Hybrid brain-machine interfaces for natural neuroprosthetic control
Recent progress in the field of neuroprosthetics has made them a promising assistive technology for motor substitution and a rehabilitation tool for people with disabilities. However, several challenges must be overcome before they can be used in practical applications. In particular, users should be able to control them reliably and intuitively over long periods of time without long, repetitive calibration sessions. This project tackles these challenges through shared control for hybrid brain-machine interfaces (BMIs) controlling upper-limb neuroprostheses, in combination with semi-supervised learning.
We will improve the accuracy and temporal precision of reach and grasp motions by combining predictions from EEG, EMG, and gaze-tracking signals. New decoding methods for this hybrid BMI will increase overall system performance, allowing the prosthesis to accurately predict the patient's intention while leveraging the patient's residual control. This approach will be complemented by shared-control strategies, in which the user provides high-level commands to the device, which translates them into low-level commands through added artificial intelligence and sensor fusion. Moreover, the use of error-related brain activity and inverse reinforcement learning will give the system the capability to adapt over time.
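As a minimal illustration of how per-modality predictions might be combined, the sketch below performs a weighted log-linear fusion of class probabilities from hypothetical EEG, EMG, and gaze decoders. The function name, class labels, and reliability weights are illustrative assumptions, not the project's actual decoding method.

```python
import numpy as np

def fuse_predictions(probs_by_modality, weights):
    """Weighted log-linear fusion of per-modality class probabilities.

    Each entry of probs_by_modality is a probability vector over the
    same set of intended actions (e.g. reach, grasp, rest); weights
    encode the assumed reliability of each modality.
    """
    log_post = sum(w * np.log(p + 1e-12)
                   for p, w in zip(probs_by_modality, weights))
    # Normalize back to a probability distribution (softmax trick
    # with max-subtraction for numerical stability).
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Illustrative example: three modalities voting over {reach, grasp, rest}.
eeg  = np.array([0.6, 0.3, 0.1])
emg  = np.array([0.5, 0.4, 0.1])
gaze = np.array([0.7, 0.2, 0.1])
fused = fuse_predictions([eeg, emg, gaze], weights=[0.4, 0.3, 0.3])
```

Here all three modalities favor the first action, so the fused distribution concentrates on it; in practice the weights could be learned or adapted from error-related brain activity rather than fixed by hand.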
This project will thus advance the state of the art in shared control by incorporating advanced robot-learning approaches based on semi-supervised techniques. The new neuroprosthetic framework will be thoroughly evaluated by end-users with upper-limb motor disabilities over several days to properly assess its suitability as a key component of practical assistive applications for daily living.