Researchers at the University of Essex hope the project could one day help people with severe communication disabilities, such as locked-in syndrome or the after-effects of stroke, by decoding the language signals within their brains through non-invasive techniques. The Essex scientists wanted a less invasive way of decoding acoustic information from brain signals in order to identify and reconstruct a piece of music someone was listening to. Whilst previous studies have successfully monitored and reconstructed acoustic information from brain waves, many used more invasive methods such as electrocorticography (ECoG), which involves placing electrodes inside the skull to monitor the surface of the brain itself.
Researchers used a combination of two non-invasive methods - fMRI, which measures blood flow throughout the entire brain, and electroencephalography (EEG), which measures the brain's electrical activity in real time - to monitor a person's brain activity whilst they listened to a piece of music. Using a deep learning neural network model, the data were translated to reconstruct and identify the piece of music. Music is a complex acoustic signal, sharing many similarities with natural language, so the model could potentially be adapted to translate speech. The eventual goal of this strand of research is to translate thought, which could offer an important aid in the future for people who struggle to communicate, such as those with locked-in syndrome.
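To illustrate the decoding idea at a very high level, the sketch below is a toy stand-in, not the Essex team's actual pipeline: the study combined fMRI and EEG recordings with a deep neural network, whereas here synthetic "EEG-like" feature vectors (entirely made up for illustration) are fed to a simple logistic-regression classifier that learns to identify which of two pieces of music a listener heard.

```python
import numpy as np

# Toy sketch only: the real study used fMRI + EEG and a deep learning
# model. Here, synthetic feature vectors (hypothetical band-power-style
# features per listening trial) are classified with logistic regression
# to show the basic "brain signal -> which piece of music" mapping.

rng = np.random.default_rng(0)

def make_trials(n_trials, centre):
    # Simulated feature vectors for repeated listens to one piece.
    return centre + 0.3 * rng.standard_normal((n_trials, 4))

# Two pieces of music produce (hypothetically) distinct feature profiles.
song_a = make_trials(50, np.array([1.0, 0.2, 0.5, 0.1]))
song_b = make_trials(50, np.array([0.1, 0.9, 0.2, 0.8]))
X = np.vstack([song_a, song_b])
y = np.array([0] * 50 + [1] * 50)  # 0 = piece A, 1 = piece B

# Train logistic regression by gradient descent (a linear stand-in for
# the deep neural network used in the actual research).
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad = p - y                            # gradient of log loss
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

# Decode: classify each trial's features back to a piece of music.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the two synthetic profiles are well separated, the classifier identifies the piece almost perfectly; the genuine challenge in the research lies in extracting informative features from noisy, non-invasive recordings, which is where the deep learning model comes in.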
More information:
https://www.essex.ac.uk/news/2023/01/19/decoding-brainwaves-to-identify-music-listened-to