
Detecting imagined speech events from brain signals

By: Aurélie de Borman, Benjamin Wittevrongel, Ine Dauwe, Evelien Carrette, Alfred Meurs, Dirk Van Roost, Paul Boon & Marc M. Van Hulle


Across the world, millions of people suffer from speech disorders. In anarthria, individuals are unable to articulate speech even though they can understand language and form words or sentences mentally. Communication becomes an even greater challenge when other motor functions are impaired as well. Patients with amyotrophic lateral sclerosis (ALS), for example, gradually lose control of their muscles, including those of the face: they become unable to speak, but also unable to use a keyboard or any other simple communication alternative, while their cognitive functions remain intact. A speech neuroprosthesis is a device that would translate brain signals directly into speech, bypassing the impaired motor pathway: brain activity is recorded and fed to a model that outputs the intended speech signal, either as sound or as text.


In this study, brain signals were recorded with electrocorticography (ECoG), an invasive form of electroencephalography (EEG) in which electrodes are surgically placed on the surface of the brain, underneath the skull. Since the surgery carries a risk, we perform our experiments with epilepsy patients who already have electrodes implanted for clinical purposes (i.e., to locate the origin of their seizures) and who volunteer to participate.

 

Participants were asked to speak, to listen, and to imagine speaking while their brain signals were recorded. Imagined speech is particularly interesting because it comes closest to the condition of someone who cannot speak. It is also challenging: nothing visible tells us when a participant is imagining speaking. A first step is therefore to detect imagined speech events from the brain signals alone. This would make it possible to switch a speech neuroprosthesis on and off and to avoid spurious output.

We developed a computational model that detects speech events from the brain signals, which also allowed us to analyze which brain regions and frequency bands are involved during speech. Experiments were performed with 16 participants, covering a total of 588 electrodes. Given this large number of electrodes, the VSC infrastructure significantly sped up the analysis and made extensive electrode-level analyses feasible.
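
To make this concrete, the sketch below shows what such a detector can look like in Python: per-electrode spectral power in a high-frequency band, averaged over sliding windows, feeding a binary speech-versus-rest classifier. The frequency band, window sizes, classifier choice, and synthetic data are all illustrative assumptions, not the model described in the paper.

    # Hedged sketch of a speech event detector on ECoG signals.
    # All parameters (sampling rate, 70-170 Hz band, window sizes) and the
    # random "recording" are assumptions for illustration only.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    FS = 1000  # sampling rate in Hz (assumed)

    def band_power(x, low, high, fs=FS):
        """Envelope power of x in the [low, high] Hz band (Hilbert transform)."""
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, x))) ** 2

    def windowed_features(ecog, win=0.5, step=0.1, fs=FS):
        """Per-electrode high-frequency power averaged over sliding windows.
        ecog has shape (n_electrodes, n_samples)."""
        power = np.stack([band_power(ch, 70, 170, fs) for ch in ecog])
        w, s = int(win * fs), int(step * fs)
        starts = range(0, power.shape[1] - w + 1, s)
        return np.stack([power[:, t:t + w].mean(axis=1) for t in starts])

    # Toy stand-in for one recording: 10 electrodes, 60 s of signal, with
    # random labels marking which windows overlap a speech event.
    rng = np.random.default_rng(0)
    ecog = rng.standard_normal((10, 60 * FS))
    X = windowed_features(ecog)              # (n_windows, n_electrodes)
    y = rng.integers(0, 2, size=X.shape[0])  # speech (1) vs. rest (0)

    clf = LogisticRegression(max_iter=1000)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

On real data, windows overlapping actual speech events would be labeled from the audio or the task timing, and detection performance would be evaluated per electrode to map which brain regions carry speech-related information.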


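Because each electrode can be analyzed independently, these electrode-level analyses are embarrassingly parallel and map naturally onto HPC cores. The sketch below illustrates that pattern with joblib; both the library choice and the placeholder analysis are assumptions, as the article does not describe the actual VSC workflow.

    # Hedged sketch: one independent job per electrode, spread over all
    # available cores. joblib is an illustrative choice only.
    import numpy as np
    from joblib import Parallel, delayed

    def analyze_electrode(signal):
        """Placeholder for a per-electrode analysis (e.g., spectral statistics)."""
        return signal.std()

    rng = np.random.default_rng(1)
    signals = rng.standard_normal((588, 10_000))  # 588 electrodes, as in the study

    results = Parallel(n_jobs=-1)(delayed(analyze_electrode)(s) for s in signals)
    print(len(results), "electrode-level results")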

For ten of the 16 participants, imagined speech events could be detected from the brain signals. We observed a change of activity in the temporal region and in the motor cortex. Similar patterns appeared across the three speech modes (speaking, listening, and imagining speaking), although imagined speech showed the weakest activation; the activity changes were most similar across modes in the motor cortex. We also showed that detection models can be transferred between speech modes and between participants.
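
The transfer experiments can be pictured as training a detector on one speech mode and testing it on another. The sketch below mimics that setup with synthetic features; the reduced class separation stands in for the weaker activation during imagined speech, and nothing here reproduces the paper's actual models or data.

    # Hedged sketch of cross-mode transfer: train on one (synthetic) speech
    # mode, evaluate on another with weaker class separation.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    def fake_session(n_windows=400, n_electrodes=10, shift=0.0):
        """Synthetic (features, labels); `shift` shrinks the class separation
        to mimic the weaker activation of imagined speech."""
        y = rng.integers(0, 2, size=n_windows)
        X = rng.standard_normal((n_windows, n_electrodes))
        X[y == 1] += 1.0 - shift
        return X, y

    X_overt, y_overt = fake_session(shift=0.0)  # training mode
    X_imag, y_imag = fake_session(shift=0.6)    # weaker test mode

    clf = LogisticRegression(max_iter=1000).fit(X_overt, y_overt)
    print("within-mode accuracy:", clf.score(X_overt, y_overt))
    print("cross-mode accuracy :", clf.score(X_imag, y_imag))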



Figure: each pie chart represents an electrode, with sections proportional to the speech detection performance for each speech mode (red = speaking, blue = listening, green = imagining speaking). Imagined speech is the most challenging mode to detect.


Reference:

de Borman, A., Wittevrongel, B., Dauwe, I. et al. Imagined speech event detection from electrocorticography and its transfer between speech modes and subjects. Commun. Biol. 7, 818 (2024).
